| column | type | min | max |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
| tokens_length | sequencelengths | 1 | 353 |
| input_texts | sequencelengths | 1 | 40 |
5d3085f2129139abc10d2b58becd4d4f2978e5d5
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!

- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** [email protected]

## Dataset Structure

The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages.

**mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305").

**functionality** The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case** The test case text.

**label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id** The equivalent of ref_case_id, but for template IDs.

**templ_id** The ID of the template from which the test case was generated.

**case_templ** The template from which the test case was generated (where applicable).

**gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

**label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
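The relationship between the annotation columns (label_annotated, label_annotated_maj, disagreement_in_case, disagreement_in_template) can be sketched with a few toy rows, not real MHC data. Note this is a simplification: in the actual dataset, disagreement_in_template is additionally restricted to cases generated from IDENT templates.

```python
import ast
from collections import Counter

# Toy rows mimicking the MHC CSV columns described above (not real MHC data).
rows = [
    {"mhc_case_id": "french-1", "templ_id": 10, "label_gold": "hateful",
     "label_annotated": "['hateful', 'hateful', 'hateful']"},
    {"mhc_case_id": "french-2", "templ_id": 10, "label_gold": "hateful",
     "label_annotated": "['hateful', 'non-hateful', 'non-hateful']"},
    {"mhc_case_id": "french-3", "templ_id": 11, "label_gold": "non-hateful",
     "label_annotated": "['non-hateful', 'non-hateful', 'hateful']"},
]

for row in rows:
    # label_annotated is stored as a stringified Python list in the CSV.
    labels = ast.literal_eval(row["label_annotated"])
    # Majority vote of the three annotators.
    row["label_annotated_maj"] = Counter(labels).most_common(1)[0][0]
    # True when the majority vote disagrees with the expert gold label.
    row["disagreement_in_case"] = row["label_annotated_maj"] != row["label_gold"]

# A template is flagged if any case generated from it has disagreement_in_case.
bad_templates = {r["templ_id"] for r in rows if r["disagreement_in_case"]}
for row in rows:
    row["disagreement_in_template"] = row["templ_id"] in bad_templates

# Excluding entire templates with disagreement, as the card suggests.
clean = [r for r in rows if not r["disagreement_in_template"]]
```

Here "french-2" has a 2–1 majority for "non-hateful" against a "hateful" gold label, so both cases from template 10 are excluded and only "french-3" survives the template-level filter.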
Paul/hatecheck-french
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:39:16+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["fr"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "French HateCheck"}
2022-07-05T09:40:23+00:00
[ "2206.09917" ]
[ "fr" ]
d622427417d37a8d74e110e6289bc29af4ba4056
Paul/hatecheck-dutch
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:nl", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:40:49+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["nl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Dutch HateCheck"}
2022-07-05T09:41:31+00:00
[ "2206.09917" ]
[ "nl" ]
65eb7455a05cb77b3ae0c69d444569a8eee54628
# Dataset Card for Multilingual HateCheck

## Dataset Description

Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.

For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** [email protected]

## Dataset Structure

The CSV format mostly matches the original HateCheck data, with some adjustments for specific languages.

**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")

**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.

**test_case**
The test case text.

**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.

**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.

**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.

**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.

**templ_id**
The ID of the template from which the test case was generated.

**case_templ**
The template from which the test case was generated (where applicable).

**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.

**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").

**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.

**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.

**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
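Because every test case carries a `functionality` and a `label_gold`, model predictions can be scored per functionality to obtain the targeted diagnostics described above. A minimal sketch — the rows, predictions, and the `accuracy_by_functionality` helper below are illustrative, not part of MHC itself:

```python
from collections import defaultdict

def accuracy_by_functionality(cases, predictions):
    """Score predictions against gold labels, grouped by functionality."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for case, pred in zip(cases, predictions):
        func = case["functionality"]
        totals[func] += 1
        if pred == case["label_gold"]:
            correct[func] += 1
    return {f: correct[f] / totals[f] for f in totals}

# Toy rows mimicking the MHC columns described in this card (invented values).
cases = [
    {"mhc_case_id": "arabic-1", "functionality": "target_obj_nh", "label_gold": "non-hateful"},
    {"mhc_case_id": "arabic-2", "functionality": "target_obj_nh", "label_gold": "non-hateful"},
    {"mhc_case_id": "arabic-3", "functionality": "derog_neg_emote_h", "label_gold": "hateful"},
]
preds = ["non-hateful", "hateful", "hateful"]

print(accuracy_by_functionality(cases, preds))
# → {'target_obj_nh': 0.5, 'derog_neg_emote_h': 1.0}
```

In practice, a sharp accuracy drop on one functionality (while others stay high) points to a specific model weakness, which is the diagnostic use case MHC is built for.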
Paul/hatecheck-arabic
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:cc-by-4.0", "arxiv:2206.09917", "region:us" ]
2022-07-05T09:42:16+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Arabic HateCheck"}
2022-07-05T09:43:02+00:00
[ "2206.09917" ]
[ "ar" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #arxiv-2206.09917 #region-us
# Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL - Repository: URL - Point of Contact: paul@URL ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. mhc_case_id The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") functionality The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. test_case The test case text. label_gold The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. target_ident Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. ref_case_id For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. ref_templ_id The equivalent to ref_case_id, but for template IDs. 
templ_id The ID of the template from which the test case was generated. case_templ The template from which the test case was generated (where applicable). gender_male and gender_female For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. label_annotated A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). label_annotated_maj The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. disagreement_in_case True if label_annotated_maj does not match label_gold for the entry. disagreement_in_template True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
[ "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. 
For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n", "# Dataset Card for Multilingual HateCheck", "## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL", "## Dataset Structure\n\nThe csv format mostly matches the original HateCheck data, with some adjustments for specific languages.\n\nmhc_case_id\nThe test case ID that is unique to each test case across languages (e.g., \"mandarin-1305\")\n\nfunctionality\nThe shorthand for the functionality tested by the test case (e.g, \"target_obj_nh\"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.\n\ntest_case\nThe test case text.\n\nlabel_gold\nThe gold standard label (\"hateful\" or \"non-hateful\") of the test case. All test cases within a given functionality have the same gold standard label.\n\ntarget_ident\nWhere applicable, the protected group that is targeted or referenced in the test case. 
All HateChecks cover seven target groups, but their composition varies across languages.\n\nref_case_id\nFor hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.\n\nref_templ_id\nThe equivalent to ref_case_id, but for template IDs.\n\ntempl_id\nThe ID of the template from which the test case was generated.\n\ncase_templ\nThe template from which the test case was generated (where applicable).\n\ngender_male and gender_female\nFor gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.\n\nlabel_annotated\nA list of labels given by the three annotators who reviewed the test case (e.g., \"['hateful', 'hateful', 'hateful']\").\n\nlabel_annotated_maj\nThe majority vote of the three annotators (e.g., \"hateful\"). In some cases this differs from the gold label given by our language experts.\n\ndisagreement_in_case\nTrue if label_annotated_maj does not match label_gold for the entry.\n\ndisagreement_in_template\nTrue if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC." ]
[ 106, 11, 191, 568 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #arxiv-2206.09917 #region-us \n# Dataset Card for Multilingual HateCheck## Dataset Description\n\nMultilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.\nFor each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.\nThis allows for targeted diagnostic insights into model performance.\n\nFor more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!\n- Paper: Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. URL\n- Repository: URL\n- Point of Contact: paul@URL" ]
359c84c6ccd10e11ff3e537715218cea070b4281
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Text Classification
* Model: tals/albert-base-vitaminc
* Dataset: tals/vitaminc

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@SalimHF](https://huggingface.co/SalimHF) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-b95cefb8-9695269
[ "autotrain", "evaluation", "region:us" ]
2022-07-05T10:28:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["tals/vitaminc"], "eval_info": {"task": "multi_class_classification", "model": "tals/albert-base-vitaminc", "metrics": [], "dataset_name": "tals/vitaminc", "dataset_config": "tals--vitaminc", "dataset_split": "test", "col_mapping": {"text": "claim", "target": "label"}}}
2022-07-05T10:34:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: tals/albert-base-vitaminc * Dataset: tals/vitaminc To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @SalimHF for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tals/albert-base-vitaminc\n* Dataset: tals/vitaminc\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @SalimHF for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tals/albert-base-vitaminc\n* Dataset: tals/vitaminc\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @SalimHF for evaluating this model." ]
[ 13, 82, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tals/albert-base-vitaminc\n* Dataset: tals/vitaminc\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SalimHF for evaluating this model." ]
40e0786f16bdcd0b2edd26caf6351229e387aa25
annotations_creators:
- other
language:
- en
language_creators:
- machine-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: FCD
size_categories: []
source_datasets: []
task_categories:
- feature-extraction
task_ids: []

# Dataset Card for FCD

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

FCD dataset

### Supported Tasks and Leaderboards

NLP

### Languages

en

## Dataset Structure

### Data Instances

[Needs More Information]

### Data Fields

[Needs More Information]

### Data Splits

[Needs More Information]

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

[Needs More Information]
sheikh/FCD_lmv2
[ "region:us" ]
2022-07-05T11:32:11+00:00
{}
2022-07-07T18:28:37+00:00
[]
[]
TAGS #region-us
annotations_creators: - other language: - en language_creators: - machine-generated license: - unknown multilinguality: - monolingual pretty_name: FCD size_categories: [] source_datasets: [] task_categories: - feature-extraction task_ids: [] # Dataset Card for FCD ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary FCD dataset ### Supported Tasks and Leaderboards NLP ### Languages en ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for FCD", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nFCD dataset", "### Supported Tasks and Leaderboards\n\nNLP", "### Languages\n\nen", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for FCD", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nFCD dataset", "### Supported Tasks and Leaderboards\n\nNLP", "### Languages\n\nen", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ 6, 7, 112, 24, 10, 12, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for FCD## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nFCD dataset### Supported Tasks and Leaderboards\n\nNLP### Languages\n\nen## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information" ]
781353c2c8eafa4d734fa33cb48fee8f50253703
See https://arxiv.org/abs/2108.05289
vesteinn/icelandic-parallel-abstracts-corpus-IPAC
[ "license:other", "arxiv:2108.05289", "region:us" ]
2022-07-05T13:21:42+00:00
{"license": "other"}
2022-07-05T14:24:33+00:00
[ "2108.05289" ]
[]
TAGS #license-other #arxiv-2108.05289 #region-us
See URL
[]
[ "TAGS\n#license-other #arxiv-2108.05289 #region-us \n" ]
[ 19 ]
[ "passage: TAGS\n#license-other #arxiv-2108.05289 #region-us \n" ]
84087ff790a60b7e361d86c6acd9a558f07a1244
# Dataset Card for FigLang2022SharedTask

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://figlang2022sharedtask.github.io/
- **Repository:**
- **Paper:** TBA
- **Point of Contact:** [email protected]

### Dataset Summary

A model-in-the-loop approach for figurative language generation and explainability.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

TBA

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
ColumbiaNLP/FLUTE
[ "task_categories:text-classification", "task_categories:text2text-generation", "task_ids:natural-language-inference", "task_ids:explanation-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "language_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:afl-3.0", "region:us" ]
2022-07-05T13:38:38+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "machine-generated", "crowdsourced"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "text2text-generation"], "task_ids": ["natural-language-inference", "explanation-generation"], "pretty_name": "FLUTE"}
2022-10-07T17:28:02+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text2text-generation #task_ids-natural-language-inference #task_ids-explanation-generation #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #region-us
# Dataset Card for FigLang2022SharedTask ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: TBA - Point of Contact: URL@URL ### Dataset Summary Model in the loop approach for fig lang generation and explainability ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information TBA ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for FigLang2022SharedTask", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: TBA\n- Point of Contact: URL@URL", "### Dataset Summary\n\nModel in the loop approach for fig lang generation and explainability", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nTBA", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_ids-natural-language-inference #task_ids-explanation-generation #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #region-us \n", "# Dataset Card for FigLang2022SharedTask", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: TBA\n- Point of Contact: URL@URL", "### Dataset Summary\n\nModel in the loop approach for fig lang generation and explainability", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nTBA", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ 141, 13, 113, 26, 19, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 8, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_ids-natural-language-inference #task_ids-explanation-generation #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #region-us \n# Dataset Card for FigLang2022SharedTask## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: TBA\n- Point of Contact: URL@URL### Dataset Summary\n\nModel in the loop approach for fig lang generation and explainability## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\n\n\n\n\nTBA### Contributions\n\nThanks to @github-username for adding this dataset." ]
17487444fd4af0e275d41e60c1134aa7e530e54f
How to use it: ``` from datasets import load_dataset remote_dataset = load_dataset("VanessaSchenkel/translation-en-pt", field="data") remote_dataset ``` Output: ``` DatasetDict({ train: Dataset({ features: ['id', 'translation'], num_rows: 260482 }) }) ``` Example: ``` remote_dataset["train"][5] ``` Output: ``` {'id': '5', 'translation': {'english': 'I have to go to sleep.', 'portuguese': 'Tenho de dormir.'}} ```
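Each row stores the sentence pair under a nested `translation` dict. A minimal sketch of pulling a (source, target) pair out of one row, using the sample record shown in the output above (the keys `english` and `portuguese` come from the card's own example):

```python
# Hypothetical sample row mirroring the example output above; in practice
# this would be remote_dataset["train"][5].
row = {
    "id": "5",
    "translation": {
        "english": "I have to go to sleep.",
        "portuguese": "Tenho de dormir.",
    },
}

# Build a (source, target) pair for an en->pt translation setup.
pair = (row["translation"]["english"], row["translation"]["portuguese"])
print(pair)
```

The same two-key access pattern applies to every row in the train split.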
VanessaSchenkel/translation-en-pt
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:pt", "license:afl-3.0", "region:us" ]
2022-07-05T23:29:28+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "pt"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "VanessaSchenkel/translation-en-pt", "tags": []}
2022-08-06T20:52:26+00:00
[]
[ "en", "pt" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us
How to use it: Output: Example: Output:
[]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us \n" ]
[ 78 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-100K<n<1M #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us \n" ]
4c1ecb9097f926272fc14d84300b55d2ee36272d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@None](https://huggingface.co/None) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d6eb1223-1416
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T02:22:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-03T03:57:41+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @None for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @None for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @None for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @None for evaluating this model." ]
065bc9a2be3c1563eecd503b8a1959649dedfe30
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: lewtun/autotrain-acronym-identification-7324788 * Dataset: acronym_identification To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Felixver](https://huggingface.co/Felixver) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ea06fa04-9805305
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T04:59:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["acronym_identification"], "eval_info": {"task": "entity_extraction", "model": "lewtun/autotrain-acronym-identification-7324788", "metrics": ["bleu"], "dataset_name": "acronym_identification", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "labels"}}}
2022-07-06T05:00:18+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: lewtun/autotrain-acronym-identification-7324788 * Dataset: acronym_identification To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Felixver for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: lewtun/autotrain-acronym-identification-7324788\n* Dataset: acronym_identification\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Felixver for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: lewtun/autotrain-acronym-identification-7324788\n* Dataset: acronym_identification\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Felixver for evaluating this model." ]
[ 13, 87, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: lewtun/autotrain-acronym-identification-7324788\n* Dataset: acronym_identification\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Felixver for evaluating this model." ]
299ed66c5872c7df638b79e9aafb6773f0cc436f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: AlexanderPeter/bert-finetuned-ner * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@test](https://huggingface.co/test) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f0d30a26-9815307
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T06:20:46+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "AlexanderPeter/bert-finetuned-ner", "metrics": ["bleu"], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-07-06T06:22:08+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: AlexanderPeter/bert-finetuned-ner * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @test for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: AlexanderPeter/bert-finetuned-ner\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: AlexanderPeter/bert-finetuned-ner\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test for evaluating this model." ]
[ 13, 78, 14 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: AlexanderPeter/bert-finetuned-ner\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @test for evaluating this model." ]
6ae881fe2106fbfb535237d40cb10814b8657e4e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: ArBert/roberta-base-finetuned-ner-kmeans * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@test](https://huggingface.co/test) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f0d30a26-9815308
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T06:20:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "ArBert/roberta-base-finetuned-ner-kmeans", "metrics": ["bleu"], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-07-06T06:22:15+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: ArBert/roberta-base-finetuned-ner-kmeans * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @test for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: ArBert/roberta-base-finetuned-ner-kmeans\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: ArBert/roberta-base-finetuned-ner-kmeans\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test for evaluating this model." ]
[ 13, 86, 14 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: ArBert/roberta-base-finetuned-ner-kmeans\n* Dataset: conll2003\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @test for evaluating this model." ]
dad99cec08a535070995cf540a061452d45b21ec
## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Sample dataset for the Library of Congress Historical American Buildings collection. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields `call_number` `control_number` `created` `created_published` `created_published_date` `creators` `date` `display_offsite` `id` `link` `medium_brief` `mediums` `modified` `notes` `part_of` `part_of_group` `place` `latitude` `link` `longitude` `title` `repository` `resource_links` `rights_advisory` `rights_information` `service_low` `service_medium` `source_created` `source_modified` `subject_headings` `thumb_gallery` `title` ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information cc0-1.0 ### Citation Information [More Information Needed]
Arachnid/loc_building_test
[ "region:us" ]
2022-07-06T06:34:10+00:00
{}
2022-10-06T03:09:46+00:00
[]
[]
TAGS #region-us
## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Sample dataset for the Library of Congress Historical American Buildings collection. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields 'call_number' 'control_number' 'created' 'created_published' 'created_published_date' 'creators' 'date' 'display_offsite' 'id' 'link' 'medium_brief' 'mediums' 'modified' 'notes' 'part_of' 'part_of_group' 'place' 'latitude' 'link' 'longitude' 'title' 'repository' 'resource_links' 'rights_advisory' 'rights_information' 'service_low' 'service_medium' 'source_created' 'source_modified' 'subject_headings' 'thumb_gallery' 'title' ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ## Additional Information ### Dataset Curators ### Licensing Information cc0-1.0
[ "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nSample dataset for the Library of Congress Historical American Buildings collection.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n'call_number'\n'control_number'\n'created'\n'created_published'\n'created_published_date'\n'creators'\n'date'\n'display_offsite'\n'id'\n'link'\n'medium_brief'\n'mediums'\n'modified'\n'notes'\n'part_of'\n'part_of_group'\n'place'\n'latitude'\n'link'\n'longitude'\n'title'\n'repository'\n'resource_links'\n'rights_advisory'\n'rights_information'\n'service_low'\n'service_medium'\n'source_created'\n'source_modified'\n'subject_headings'\n'thumb_gallery'\n'title'", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\ncc0-1.0" ]
[ "TAGS\n#region-us \n", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nSample dataset for the Library of Congress Historical American Buildings collection.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n'call_number'\n'control_number'\n'created'\n'created_published'\n'created_published_date'\n'creators'\n'date'\n'display_offsite'\n'id'\n'link'\n'medium_brief'\n'mediums'\n'modified'\n'notes'\n'part_of'\n'part_of_group'\n'place'\n'latitude'\n'link'\n'longitude'\n'title'\n'repository'\n'resource_links'\n'rights_advisory'\n'rights_information'\n'service_low'\n'service_medium'\n'source_created'\n'source_modified'\n'subject_headings'\n'thumb_gallery'\n'title'", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\ncc0-1.0" ]
[ 6, 24, 22, 10, 5, 6, 6, 171, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 5, 6, 11 ]
[ "passage: TAGS\n#region-us \n## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nSample dataset for the Library of Congress Historical American Buildings collection.### Supported Tasks and Leaderboards### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields\n\n'call_number'\n'control_number'\n'created'\n'created_published'\n'created_published_date'\n'creators'\n'date'\n'display_offsite'\n'id'\n'link'\n'medium_brief'\n'mediums'\n'modified'\n'notes'\n'part_of'\n'part_of_group'\n'place'\n'latitude'\n'link'\n'longitude'\n'title'\n'repository'\n'resource_links'\n'rights_advisory'\n'rights_information'\n'service_low'\n'service_medium'\n'source_created'\n'source_modified'\n'subject_headings'\n'thumb_gallery'\n'title'### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data## Additional Information### Dataset Curators### Licensing Information\n\ncc0-1.0" ]
ced7a40ed6741ebeadbc8b8f8215b4afaf3cbd8b
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ms-en Notebooks to gather the dataset at https://github.com/huseinzol05/malay-dataset/tree/master/translation/malay-english Preparation notebook at https://github.com/huseinzol05/malaya/blob/master/session/translation/ms-en/download-prepare.ipynb
mesolitica/ms-en
[ "generated_from_keras_callback", "region:us" ]
2022-07-06T06:42:31+00:00
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "t5-tiny-finetuned-noisy-ms-en", "results": []}]}
2022-07-23T06:52:20+00:00
[]
[]
TAGS #generated_from_keras_callback #region-us
# ms-en Notebooks to gather the dataset at URL Preparation notebook at URL
[ "# ms-en\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
[ "TAGS\n#generated_from_keras_callback #region-us \n", "# ms-en\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
[ 17, 19 ]
[ "passage: TAGS\n#generated_from_keras_callback #region-us \n# ms-en\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
a56a81adc83587324e457eb5670d1a2d30376e8b
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # en-ms Notebooks to gather the dataset at https://github.com/huseinzol05/malay-dataset/tree/master/translation/malay-english Preparation notebook at https://github.com/huseinzol05/malaya/blob/master/session/translation/en-ms/download-prepare.ipynb
mesolitica/en-ms
[ "generated_from_keras_callback", "region:us" ]
2022-07-06T06:44:49+00:00
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "t5-tiny-finetuned-noisy-ms-en", "results": []}]}
2022-07-23T08:48:16+00:00
[]
[]
TAGS #generated_from_keras_callback #region-us
# en-ms Notebooks to gather the dataset at URL Preparation notebook at URL
[ "# en-ms\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
[ "TAGS\n#generated_from_keras_callback #region-us \n", "# en-ms\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
[ 17, 19 ]
[ "passage: TAGS\n#generated_from_keras_callback #region-us \n# en-ms\n\nNotebooks to gather the dataset at URL\n\nPreparation notebook at URL" ]
718e47758522bdcb842689584f767d751f47ecf6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d7e89585-9845311
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T08:29:55+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-06T08:35:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 77, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a33a8e0a0cd4dc75b8034e353f8a4b143d9f02d1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/distilbert-base-uncased-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-f650c475-9895316
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T09:41:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/distilbert-base-uncased-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-06T09:42:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/distilbert-base-uncased-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @bhadresh-savani for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/distilbert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/distilbert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ 13, 87, 19 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/distilbert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
7b5fb65a599e1c6810811bfc669327e1599a5344
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/roberta-base-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-b9c02377-9905317
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T09:41:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/roberta-base-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-06T09:42:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/roberta-base-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @bhadresh-savani for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ 13, 82, 19 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
284172791918d355e0d23ce574882ae0ab2d5175
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/bert-base-uncased-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-4742109c-9915318
[ "autotrain", "evaluation", "region:us" ]
2022-07-06T09:42:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/bert-base-uncased-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-06T09:43:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/bert-base-uncased-emotion * Dataset: emotion To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @bhadresh-savani for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/bert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/bert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
[ 13, 85, 19 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/bert-base-uncased-emotion\n* Dataset: emotion\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @bhadresh-savani for evaluating this model." ]
131d18ace1c171cfe3cb2b2a889678b614783580
# GitHub Jupyter Dataset ## Dataset Description This is a parsed and preprocessed version of [GitHub-Jupyter Dataset](https://huggingface.co/datasets/codeparrot/github-jupyter), a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells. ## Licenses Each example has the license of its associated repository. There are in total 15 licenses: ```python [ 'mit', 'apache-2.0', 'gpl-3.0', 'gpl-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-3.0', 'lgpl-2.1', 'bsd-2-clause', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'isc', 'artistic-2.0' ] ```
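The card above mentions length-based filtering heuristics but does not state the thresholds used. A minimal sketch of such a cell-length filter, where `MIN_CHARS`, `MAX_CHARS`, and `MIN_CELLS` are purely illustrative assumptions rather than the values actually applied:

```python
# Sketch of the kind of cell-length filtering described above.
# The actual thresholds for github-jupyter-parsed are not stated in
# the card; the constants below are illustrative assumptions only.
MIN_CHARS = 32      # drop very short cells (assumed)
MAX_CHARS = 10_000  # drop very long cells (assumed)
MIN_CELLS = 4       # drop notebooks with little usable data (assumed)

def keep_cell(cell_text):
    """Keep a markdown/python cell only if its length is in range."""
    return MIN_CHARS <= len(cell_text) <= MAX_CHARS

def filter_notebook(cells):
    """Return the kept cells, or None if too few survive the filter."""
    kept = [c for c in cells if keep_cell(c)]
    return kept if len(kept) >= MIN_CELLS else None
```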
codeparrot/github-jupyter-parsed
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "language:code", "license:other", "region:us" ]
2022-07-06T12:09:04+00:00
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]}
2022-10-25T08:30:23+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #language-code #license-other #region-us
# GitHub Jupyter Dataset ## Dataset Description This is a parsed and preprocessed version of GitHub-Jupyter Dataset, a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells. ## Licenses Each example has the license of its associated repository. There are in total 15 licenses:
[ "# GitHub Jupyter Dataset", "## Dataset Description\nThis is a parsed and preprocessed version of GitHub-Jupyter Dataset, a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells.", "## Licenses\nEach example has the license of its associated repository. There are in total 15 licenses:" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #language-code #license-other #region-us \n", "# GitHub Jupyter Dataset", "## Dataset Description\nThis is a parsed and preprocessed version of GitHub-Jupyter Dataset, a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells.", "## Licenses\nEach example has the license of its associated repository. There are in total 15 licenses:" ]
[ 76, 9, 82, 23 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-unknown #language-code #license-other #region-us \n# GitHub Jupyter Dataset## Dataset Description\nThis is a parsed and preprocessed version of GitHub-Jupyter Dataset, a dataset extracted from Jupyter Notebooks on BigQuery. We only keep markdown and python cells and convert the markdown to text. Some heuristics are also applied to filter notebooks with little data and very long or very short cells.## Licenses\nEach example has the license of its associated repository. There are in total 15 licenses:" ]
440f897b86f89278e310fc28e6f5d57df5fc3039
# Dataset Card for Natural Questions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 42981 MB - **Size of the generated dataset:** 139706 MB - **Total amount of disk used:** 182687 MB ### Dataset Summary The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the 
question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 42981 MB - **Size of the generated dataset:** 139706 MB - **Total amount of disk used:** 182687 MB An example of 'train' looks as follows. ``` ``` ### Data Fields The data fields are the same among all splits. #### default ``` "id": datasets.Value("string"), "document": { "title": datasets.Value("string"), "url": datasets.Value("string"), "html": datasets.Value("string"), "tokens": datasets.features.Sequence( { "token": datasets.Value("string"), "is_html": datasets.Value("bool"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), } ), }, "question": { "text": datasets.Value("string"), "tokens": datasets.features.Sequence(datasets.Value("string")), }, "long_answer_candidates": datasets.features.Sequence( { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), "top_level": datasets.Value("bool"), } ), "annotations": datasets.features.Sequence( { "id": datasets.Value("string"), "long_answer": { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": datasets.Value("int64"), "end_byte": datasets.Value("int64"), "candidate_index": datasets.Value("int64") }, "short_answers": datasets.features.Sequence( { "start_token": datasets.Value("int64"), "end_token": datasets.Value("int64"), "start_byte": 
datasets.Value("int64"), "end_byte": datasets.Value("int64"), "text": datasets.Value("string"), } ), "yes_no_answer": datasets.features.ClassLabel( names=["NO", "YES"] ), # Can also be -1 for NONE. } ) ``` ### Data Splits | name | train | validation | |---------|-------:|-----------:| | default | 307373 | 7830 | | dev | N/A | 7830 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/). ### Citation Information ``` @article{47761, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {Transactions of the Association of Computational Linguistics} } ``` ### Contributions
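In the schema above, `yes_no_answer` is a `ClassLabel` with names `["NO", "YES"]`, with `-1` reserved for NONE. A small decoding helper (the function name is ours, not part of the dataset API) might look like:

```python
# Decode the yes_no_answer ClassLabel from the schema above.
# Per the card, indices map to ["NO", "YES"]; -1 means no
# yes/no answer is present (NONE).
YES_NO_NAMES = ["NO", "YES"]

def decode_yes_no(idx):
    if idx == -1:
        return "NONE"
    return YES_NO_NAMES[idx]
```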
rongzhangibm/NaturalQuestionsV2
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-07-06T12:50:46+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "natural-questions", "pretty_name": "Natural Questions"}
2022-07-07T04:22:20+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us
Dataset Card for Natural Questions ================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 42981 MB * Size of the generated dataset: 139706 MB * Total amount of disk used: 182687 MB ### Dataset Summary The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 42981 MB * Size of the generated dataset: 139706 MB * Total amount of disk used: 182687 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Creative Commons Attribution-ShareAlike 3.0 Unported. ### Contributions
[ "### Dataset Summary\n\n\nThe NQ corpus contains questions from real users, and it requires QA systems to\nread and comprehend an entire Wikipedia article that may or may not contain the\nanswer to the question. The inclusion of real user questions, and the\nrequirement that solutions should read an entire page to find the answer, cause\nNQ to be a more realistic and challenging task than prior QA datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 42981 MB\n* Size of the generated dataset: 139706 MB\n* Total amount of disk used: 182687 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution-ShareAlike 3.0 Unported.", "### Contributions" ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n", "### Dataset Summary\n\n\nThe NQ corpus contains questions from real users, and it requires QA systems to\nread and comprehend an entire Wikipedia article that may or may not contain the\nanswer to the question. The inclusion of real user questions, and the\nrequirement that solutions should read an entire page to find the answer, cause\nNQ to be a more realistic and challenging task than prior QA datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 42981 MB\n* Size of the generated dataset: 139706 MB\n* Total amount of disk used: 182687 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution-ShareAlike 3.0 Unported.", "### Contributions" ]
[ 97, 87, 10, 11, 6, 49, 17, 3, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 14, 5 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n### Dataset Summary\n\n\nThe NQ corpus contains questions from real users, and it requires QA systems to\nread and comprehend an entire Wikipedia article that may or may not contain the\nanswer to the question. The inclusion of real user questions, and the\nrequirement that solutions should read an entire page to find the answer, cause\nNQ to be a more realistic and challenging task than prior QA datasets.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 42981 MB\n* Size of the generated dataset: 139706 MB\n* Total amount of disk used: 182687 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nCreative Commons Attribution-ShareAlike 3.0 Unported.### Contributions" ]
2725c3ff834ab4c2fea3dbf681095bfc5126e47b
# Dataset Card for ogbg-molhiv ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-mol)** - **[Repository](https://github.com/snap-stanford/ogb)** - **Paper:** Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) - **Leaderboard:** [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv) ### Dataset Summary The `ogbg-molhiv` dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark. ### Supported Tasks and Leaderboards `ogbg-molhiv` should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC. The associated leaderboards are here: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv).
## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset import torch from torch_geometric.data import Data from torch_geometric.loader import DataLoader ogbg_molhiv = load_dataset("graphs-datasets/ogbg-molhiv") # For the train set (replace by valid or test as needed) ogbg_molhiv_pg_list = [Data(x=torch.tensor(graph["node_feat"]), edge_index=torch.tensor(graph["edge_index"]), edge_attr=torch.tensor(graph["edge_attr"]), y=torch.tensor(graph["y"])) for graph in ogbg_molhiv["train"]] ogbg_molhiv_pg = DataLoader(ogbg_molhiv_pg_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | small | | #graphs | 41,127 | | average #nodes | 25.5 | | average #edges | 27.5 | | average node degree | 2.2 | | average cluster coefficient | 0.002 | | MaxSCC ratio | 0.993 | | graph diameter | 12.0 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): node feature vectors - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): features of the aforementioned edges - `y` (list: 1 x #labels): the label(s) to predict (here a single binary label, 0 or 1) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be recovered using ```python from ogb.graphproppred import PygGraphPropPredDataset dataset = PygGraphPropPredDataset(name = 'ogbg-molhiv') split_idx = dataset.get_idx_split() train = dataset[split_idx['train']] # valid, test ``` ## Additional Information ### Licensing Information The dataset has been released under the MIT license.
### Citation Information ``` @inproceedings{hu-etal-2020-open, author = {Weihua Hu and Matthias Fey and Marinka Zitnik and Yuxiao Dong and Hongyu Ren and Bowen Liu and Michele Catasta and Jure Leskovec}, editor = {Hugo Larochelle and Marc Aurelio Ranzato and Raia Hadsell and Maria{-}Florina Balcan and Hsuan{-}Tien Lin}, title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs}, booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, year = {2020}, url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
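The Data Properties table above reports an average node degree of 2.2; this can be recomputed from the `edge_index` and `num_nodes` fields. A small sketch, assuming each undirected edge is stored once (as the table's edge counts suggest); the toy graph below is illustrative, not real ogbg-molhiv data:

```python
# Recompute per-graph mean node degree from the row format above
# (edge_index: 2 x #edges node pairs, num_nodes: node count).
# If each undirected edge is stored once, mean degree is
# 2 * num_edges / num_nodes, consistent with the table's averages
# (2 * 27.5 / 25.5 is approximately 2.2).
def mean_degree(graph):
    num_edges = len(graph["edge_index"][0])
    return 2 * num_edges / graph["num_nodes"]

# Toy triangle graph in the card's row format (not real data):
toy = {"edge_index": [[0, 1, 2], [1, 2, 0]], "num_nodes": 3}
```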
OGB/ogbg-molhiv
[ "task_categories:graph-ml", "license:mit", "region:us" ]
2022-07-06T14:28:13+00:00
{"license": "mit", "task_categories": ["graph-ml"]}
2023-02-07T16:39:46+00:00
[]
[]
TAGS #task_categories-graph-ml #license-mit #region-us
Dataset Card for ogbg-molhiv ============================ Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * External Use + PyGeometric * Dataset Structure + Data Properties + Data Fields + Data Splits * Additional Information + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage * Repository:: * Paper:: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) * Leaderboard:: OGB leaderboard and Papers with code leaderboard ### Dataset Summary The 'ogbg-molhiv' dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark. ### Supported Tasks and Leaderboards 'ogbg-molhiv' should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC. The associated leaderboards are here: OGB leaderboard and Papers with code leaderboard. External Use ------------ ### PyGeometric To load in PyGeometric, do the following: Dataset Structure ----------------- ### Data Properties ### Data Fields Each row of a given file is a graph, with: * 'node\_feat' (list: #nodes x #node-features): nodes * 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges * 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features * 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one) * 'num\_nodes' (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be found back using Additional Information ---------------------- ### Licensing Information The dataset has been released under MIT license. 
### Contributions Thanks to @clefourrier for adding this dataset.
[ "### Dataset Summary\n\n\nThe 'ogbg-molhiv' dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark.", "### Supported Tasks and Leaderboards\n\n\n'ogbg-molhiv' should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC.\n\n\nThe associated leaderboards are here: OGB leaderboard and Papers with code leaderboard.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under MIT license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ "TAGS\n#task_categories-graph-ml #license-mit #region-us \n", "### Dataset Summary\n\n\nThe 'ogbg-molhiv' dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark.", "### Supported Tasks and Leaderboards\n\n\n'ogbg-molhiv' should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC.\n\n\nThe associated leaderboards are here: OGB leaderboard and Papers with code leaderboard.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under MIT license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ 21, 51, 82, 25, 4, 158, 47, 16, 17 ]
[ "passage: TAGS\n#task_categories-graph-ml #license-mit #region-us \n### Dataset Summary\n\n\nThe 'ogbg-molhiv' dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark.### Supported Tasks and Leaderboards\n\n\n'ogbg-molhiv' should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC.\n\n\nThe associated leaderboards are here: OGB leaderboard and Papers with code leaderboard.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under MIT license.### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
f36dea879a9a16bccde6b06a5d180465ea2cdff9
# Dataset Card for HaGRID - HAnd Gesture Recognition Image Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/kapitanov/hagrid - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ![](https://github.com/hukenovs/hagrid/blob/master/images/hagrid.jpg?raw=true) We introduce a large image dataset **HaGRID** (**HA**nd **G**esture **R**ecognition **I**mage **D**ataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, and more. **HaGRID** is **716GB** in size and contains **552,992 FullHD** (1920 × 1080) RGB images divided into **18** classes of gestures. 
Also, some images contain a `no_gesture` class if there is a second, gesture-free hand in the frame. This extra class contains **123,589** samples. The data were split by subject **user_id** into training (**92%**) and testing (**8%**) sets, with **509,323** images for training and **43,669** images for testing. ![](https://github.com/hukenovs/hagrid/raw/master/images/gestures.jpg) The dataset contains **34,730** unique persons and at least this number of unique scenes. The subjects are people from 18 to 65 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. In addition, the dataset includes images taken in extreme conditions, such as subjects facing or backing toward a window. The subjects also had to show gestures at a distance of 0.5 to 4 meters from the camera. ## Annotations The annotations consist of bounding boxes of hands with gesture labels in COCO format `[top left X position, top left Y position, width, height]`. Annotations also include a `leading_hand` markup (`left` or `right` for the gesture hand) and a `leading_conf` confidence score for the `leading_hand` annotation. We provide a `user_id` field that allows you to split the train / val dataset yourself. ```json "03487280-224f-490d-8e36-6c5f48e3d7a0": { "bboxes": [ [0.0283366, 0.8686061, 0.0757000, 0.1149820], [0.6824319, 0.2661254, 0.1086447, 0.1481245] ], "labels": [ "no_gesture", "one" ], "leading_hand": "left", "leading_conf": 1.0, "user_id": "bb138d5db200f29385f..." } ``` ## Downloads Because of the large data size, we split the train dataset into 18 archives by gesture. 
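As an illustration, the relative boxes in the sample annotation above can be converted to pixel coordinates with a short sketch. This assumes, as the sample values suggest, that coordinates are normalized to [0, 1] and frames are FullHD (1920 × 1080); `to_pixel_bbox` is a hypothetical helper, not part of the dataset tooling — verify against your copy of the data.

```python
# Hypothetical helper: map HaGRID's relative [x, y, w, h] boxes to pixels.
# Assumes coordinates normalized to [0, 1] and 1920x1080 frames.
def to_pixel_bbox(bbox, width=1920, height=1080):
    x, y, w, h = bbox
    return [round(x * width), round(y * height), round(w * width), round(h * height)]

annotation = {
    "bboxes": [
        [0.0283366, 0.8686061, 0.0757000, 0.1149820],
        [0.6824319, 0.2661254, 0.1086447, 0.1481245],
    ],
    "labels": ["no_gesture", "one"],
}

pixel_boxes = [to_pixel_bbox(b) for b in annotation["bboxes"]]
print(pixel_boxes)  # [[54, 938, 145, 124], [1310, 287, 209, 160]]
```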
Download and unzip them from the following links: ### Trainval | Gesture | Size | Gesture | Size | |-----------------------------------|----------|-------------------------------------------|---------| | [`call`](https://sc.link/ykEn) | 39.1 GB | [`peace`](https://sc.link/l6nM) | 38.6 GB | | [`dislike`](https://sc.link/xjDB) | 38.7 GB | [`peace_inverted`](https://sc.link/mXoG) | 38.6 GB | | [`fist`](https://sc.link/wgB8) | 38.0 GB | [`rock`](https://sc.link/kMm6) | 38.9 GB | | [`four`](https://sc.link/vJA5) | 40.5 GB | [`stop`](https://sc.link/gXgk) | 38.3 GB | | [`like`](https://sc.link/r7wp) | 38.3 GB | [`stop_inverted`](https://sc.link/jJlv) | 40.2 GB | | [`mute`](https://sc.link/q8vp) | 39.5 GB | [`three`](https://sc.link/wgBr) | 39.4 GB | | [`ok`](https://sc.link/pV0V) | 39.0 GB | [`three2`](https://sc.link/vJA8) | 38.5 GB | | [`one`](https://sc.link/oJqX) | 39.9 GB | [`two_up`](https://sc.link/q8v7) | 41.2 GB | | [`palm`](https://sc.link/nJp7) | 39.3 GB | [`two_up_inverted`](https://sc.link/r7w2) | 39.2 GB | `train_val` **annotations**: [`ann_train_val`](https://sc.link/BE5Y) ### Test | Test | Archives | Size | |-------------|-------------------------------------|-----------| | images | [`test`](https://sc.link/zlGy) | 60.4 GB | | annotations | [`ann_test`](https://sc.link/DE5K) | 3.4 MB | ### Subsample Subsample has 100 items per gesture. | Subsample | Archives | Size | |-------------|-----------------------------------------|-----------| | images | [`subsample`](https://sc.link/AO5l) | 2.5 GB | | annotations | [`ann_subsample`](https://sc.link/EQ5g) | 153.8 KB | ## Models We provide some pre-trained classifiers and one detector as baselines. 
| Classifiers | F1 Gesture | F1 Leading hand | |-------------------------------------------|------------|-----------------| | [ResNet18](https://sc.link/KEnx) | 98.72 | 99.27 | | [ResNet152](https://sc.link/O9rr) | 99.11 | **99.45** | | [ResNeXt50](https://sc.link/GKjJ) | 98.99 | 99.39 | | [ResNeXt101](https://sc.link/JXmg) | **99.28** | 99.28 | | [MobileNetV3-small](https://sc.link/XVEg) | 96.78 | 98.28 | | [MobileNetV3-large](https://sc.link/YXG2) | 97.88 | 98.58 | | [VitB-32](https://sc.link/XV4g) | 98.49 | 99.13 | | Detector | mAP | |---------------------------------|-------| | [SSDLite](https://sc.link/YXg2) | 71.49 | ## Links - [Github](https://github.com/hukenovs/hagrid), [Mirror](https://gitlab.aicloud.sbercloud.ru/rndcv/hagrid) - [arXiv](https://arxiv.org/abs/2206.08219) - [Paperswithcode](https://paperswithcode.com/paper/hagrid-hand-gesture-recognition-image-dataset) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@kapitanov](https://kaggle.com/kapitanov) ### Licensing Information The license for this dataset is cc-by-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
abhishek/hagrid
[ "license:cc-by-sa-4.0", "arxiv:2206.08219", "region:us" ]
2022-07-06T15:14:11+00:00
{"license": ["cc-by-sa-4.0"], "kaggle_id": "kapitanov/hagrid"}
2022-10-25T09:39:46+00:00
[ "2206.08219" ]
[]
TAGS #license-cc-by-sa-4.0 #arxiv-2206.08219 #region-us
Dataset Card for HaGRID - HAnd Gesture Recognition Image Dataset ================================================================ Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary ![](URL We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, and more. HaGRID is 716GB in size and contains 552,992 FullHD (1920 × 1080) RGB images divided into 18 classes of gestures. Also, some images contain a 'no\_gesture' class if there is a second, gesture-free hand in the frame. This extra class contains 123,589 samples. The data were split by subject user-id into training (92%) and testing (8%) sets, with 509,323 images for training and 43,669 images for testing. ![](URL The dataset contains 34,730 unique persons and at least this number of unique scenes. The subjects are people from 18 to 65 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. In addition, the dataset includes images taken in extreme conditions, such as subjects facing or backing toward a window. 
The subjects also had to show gestures at a distance of 0.5 to 4 meters from the camera. Annotations ----------- The annotations consist of bounding boxes of hands with gesture labels in COCO format '[top left X position, top left Y position, width, height]'. Annotations also include a 'leading\_hand' markup ('left' or 'right' for the gesture hand) and a 'leading\_conf' confidence score for the 'leading\_hand' annotation. We provide a 'user\_id' field that allows you to split the train / val dataset yourself. Downloads --------- Because of the large data size, we split the train dataset into 18 archives by gesture. Download and unzip them from the following links: ### Trainval 'train\_val' annotations: 'ann\_train\_val' ### Test Test: images, Archives: 'test', Size: 60.4 GB Test: annotations, Archives: 'ann\_test', Size: 3.4 MB ### Subsample Subsample has 100 items per gesture. Subsample: images, Archives: 'subsample', Size: 2.5 GB Subsample: annotations, Archives: 'ann\_subsample', Size: 153.8 KB Models ------ We provide some pre-trained classifiers and one detector as baselines. Classifiers: ResNet18, F1 Gesture: 98.72, F1 Leading hand: 99.27 Classifiers: ResNet152, F1 Gesture: 99.11, F1 Leading hand: 99.45 Classifiers: ResNeXt50, F1 Gesture: 98.99, F1 Leading hand: 99.39 Classifiers: ResNeXt101, F1 Gesture: 99.28, F1 Leading hand: 99.28 Classifiers: MobileNetV3-small, F1 Gesture: 96.78, F1 Leading hand: 98.28 Classifiers: MobileNetV3-large, F1 Gesture: 97.88, F1 Leading hand: 98.58 Classifiers: VitB-32, F1 Gesture: 98.49, F1 Leading hand: 99.13 Links ----- * Github, Mirror * arXiv * Paperswithcode ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was shared by @kapitanov ### Licensing Information The license for this dataset is cc-by-sa-4.0 ### Contributions
[ "### Dataset Summary\n\n\n![](URL\n\n\nWe introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows to build HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc.\n\n\nHaGRID size is 716GB and dataset contains 552,992 FullHD (1920 × 1080) RGB images divided into 18 classes of gestures. Also, some images have 'no\\_gesture' class if there is a second free hand in the frame. This extra class contains 123,589 samples. The data were split into training 92%, and testing 8% sets by subject user-id, with 509,323 images for train and 43,669 images for test.\n\n\n![](URL\n\n\nThe dataset contains 34,730 unique persons and at least this number of unique scenes. The subjects are people from 18 to 65 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. Besides, the dataset includes images taken in extreme conditions such as facing and backing to a window. Also, the subjects had to show gestures at a distance of 0.5 to 4 meters from the camera.\n\n\nAnnotations\n-----------\n\n\nThe annotations consist of bounding boxes of hands with gesture labels in COCO format '[top left X position, top left Y position, width, height]'. Also annotations have markups of 'leading hands' ('left' of 'right' for gesture hand) and 'leading\\_conf' as confidence for 'leading\\_hand' annotation. We provide 'user\\_id' field that will allow you to split the train / val dataset yourself.\n\n\nDownloads\n---------\n\n\nWe split the train dataset into 18 archives by gestures because of the large size of data. 
Download and unzip them from the following links:", "### Trainval\n\n\n\n'train\\_val' annotations: 'ann\\_train\\_val'", "### Test\n\n\nTest: images, Archives: 'test', Size: 60.4 GB\nTest: annotations, Archives: 'ann\\_test', Size: 3.4 MB", "### Subsample\n\n\nSubsample has 100 items per gesture.\n\n\nSubsample: images, Archives: 'subsample', Size: 2.5 GB\nSubsample: annotations, Archives: 'ann\\_subsample', Size: 153.8 KB\n\n\nModels\n------\n\n\nWe provide some pre-trained classifiers and one detector as baselines.\n\n\nClassifiers: ResNet18, F1 Gesture: 98.72, F1 Leading hand: 99.27\nClassifiers: ResNet152, F1 Gesture: 99.11, F1 Leading hand: 99.45\nClassifiers: ResNeXt50, F1 Gesture: 98.99, F1 Leading hand: 99.39\nClassifiers: ResNeXt101, F1 Gesture: 99.28, F1 Leading hand: 99.28\nClassifiers: MobileNetV3-small, F1 Gesture: 96.78, F1 Leading hand: 98.28\nClassifiers: MobileNetV3-large, F1 Gesture: 97.88, F1 Leading hand: 98.58\nClassifiers: VitB-32, F1 Gesture: 98.49, F1 Leading hand: 99.13\n\n\n\nLinks\n-----\n\n\n* Github, Mirror\n* arXiv\n* Paperswithcode", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @kapitanov", "### Licensing Information\n\n\nThe license for this dataset is cc-by-sa-4.0", "### Contributions" ]
[ "TAGS\n#license-cc-by-sa-4.0 #arxiv-2206.08219 #region-us \n", "### Dataset Summary\n\n\n![](URL\n\n\nWe introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows to build HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc.\n\n\nHaGRID size is 716GB and dataset contains 552,992 FullHD (1920 × 1080) RGB images divided into 18 classes of gestures. Also, some images have 'no\\_gesture' class if there is a second free hand in the frame. This extra class contains 123,589 samples. The data were split into training 92%, and testing 8% sets by subject user-id, with 509,323 images for train and 43,669 images for test.\n\n\n![](URL\n\n\nThe dataset contains 34,730 unique persons and at least this number of unique scenes. The subjects are people from 18 to 65 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. Besides, the dataset includes images taken in extreme conditions such as facing and backing to a window. Also, the subjects had to show gestures at a distance of 0.5 to 4 meters from the camera.\n\n\nAnnotations\n-----------\n\n\nThe annotations consist of bounding boxes of hands with gesture labels in COCO format '[top left X position, top left Y position, width, height]'. Also annotations have markups of 'leading hands' ('left' of 'right' for gesture hand) and 'leading\\_conf' as confidence for 'leading\\_hand' annotation. We provide 'user\\_id' field that will allow you to split the train / val dataset yourself.\n\n\nDownloads\n---------\n\n\nWe split the train dataset into 18 archives by gestures because of the large size of data. 
Download and unzip them from the following links:", "### Trainval\n\n\n\n'train\\_val' annotations: 'ann\\_train\\_val'", "### Test\n\n\nTest: images, Archives: 'test', Size: 60.4 GB\nTest: annotations, Archives: 'ann\\_test', Size: 3.4 MB", "### Subsample\n\n\nSubsample has 100 items per gesture.\n\n\nSubsample: images, Archives: 'subsample', Size: 2.5 GB\nSubsample: annotations, Archives: 'ann\\_subsample', Size: 153.8 KB\n\n\nModels\n------\n\n\nWe provide some pre-trained classifiers and one detector as baselines.\n\n\nClassifiers: ResNet18, F1 Gesture: 98.72, F1 Leading hand: 99.27\nClassifiers: ResNet152, F1 Gesture: 99.11, F1 Leading hand: 99.45\nClassifiers: ResNeXt50, F1 Gesture: 98.99, F1 Leading hand: 99.39\nClassifiers: ResNeXt101, F1 Gesture: 99.28, F1 Leading hand: 99.28\nClassifiers: MobileNetV3-small, F1 Gesture: 96.78, F1 Leading hand: 98.28\nClassifiers: MobileNetV3-large, F1 Gesture: 97.88, F1 Leading hand: 98.58\nClassifiers: VitB-32, F1 Gesture: 98.49, F1 Leading hand: 99.13\n\n\n\nLinks\n-----\n\n\n* Github, Mirror\n* arXiv\n* Paperswithcode", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @kapitanov", "### Licensing Information\n\n\nThe license for this dataset is cc-by-sa-4.0", "### Contributions" ]
[ 26, 472, 25, 37, 289, 10, 11, 6, 5, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 16, 21, 5 ]
[ "passage: TAGS\n#license-cc-by-sa-4.0 #arxiv-2206.08219 #region-us \n### Dataset Summary\n\n\n![](URL\n\n\nWe introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows to build HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc.\n\n\nHaGRID size is 716GB and dataset contains 552,992 FullHD (1920 × 1080) RGB images divided into 18 classes of gestures. Also, some images have 'no\\_gesture' class if there is a second free hand in the frame. This extra class contains 123,589 samples. The data were split into training 92%, and testing 8% sets by subject user-id, with 509,323 images for train and 43,669 images for test.\n\n\n![](URL\n\n\nThe dataset contains 34,730 unique persons and at least this number of unique scenes. The subjects are people from 18 to 65 years old. The dataset was collected mainly indoors with considerable variation in lighting, including artificial and natural light. Besides, the dataset includes images taken in extreme conditions such as facing and backing to a window. Also, the subjects had to show gestures at a distance of 0.5 to 4 meters from the camera.\n\n\nAnnotations\n-----------\n\n\nThe annotations consist of bounding boxes of hands with gesture labels in COCO format '[top left X position, top left Y position, width, height]'. Also annotations have markups of 'leading hands' ('left' of 'right' for gesture hand) and 'leading\\_conf' as confidence for 'leading\\_hand' annotation. We provide 'user\\_id' field that will allow you to split the train / val dataset yourself.\n\n\nDownloads\n---------\n\n\nWe split the train dataset into 18 archives by gestures because of the large size of data. 
Download and unzip them from the following links:", "passage: ### Trainval\n\n\n\n'train\\_val' annotations: 'ann\\_train\\_val'### Test\n\n\nTest: images, Archives: 'test', Size: 60.4 GB\nTest: annotations, Archives: 'ann\\_test', Size: 3.4 MB### Subsample\n\n\nSubsample has 100 items per gesture.\n\n\nSubsample: images, Archives: 'subsample', Size: 2.5 GB\nSubsample: annotations, Archives: 'ann\\_subsample', Size: 153.8 KB\n\n\nModels\n------\n\n\nWe provide some pre-trained classifiers and one detector as baselines.\n\n\nClassifiers: ResNet18, F1 Gesture: 98.72, F1 Leading hand: 99.27\nClassifiers: ResNet152, F1 Gesture: 99.11, F1 Leading hand: 99.45\nClassifiers: ResNeXt50, F1 Gesture: 98.99, F1 Leading hand: 99.39\nClassifiers: ResNeXt101, F1 Gesture: 99.28, F1 Leading hand: 99.28\nClassifiers: MobileNetV3-small, F1 Gesture: 96.78, F1 Leading hand: 98.28\nClassifiers: MobileNetV3-large, F1 Gesture: 97.88, F1 Leading hand: 98.58\nClassifiers: VitB-32, F1 Gesture: 98.49, F1 Leading hand: 99.13\n\n\n\nLinks\n-----\n\n\n* Github, Mirror\n* arXiv\n* Paperswithcode### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields### Data Splits\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nThis dataset was shared by @kapitanov### Licensing Information\n\n\nThe license for this dataset is cc-by-sa-4.0" ]
e31ecab901244db4e739433cd66758de1ba6aa58
# Dataset Card for ogbg-molpcba ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-mol) - **Repository:** [Repo](https://github.com/snap-stanford/ogb) - **Paper:** Open Graph Benchmark: Datasets for Machine Learning on Graphs - **Leaderboard:** [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molpcba) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molpcba) ### Dataset Summary The `ogbg-molpcba` dataset is a medium-sized molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark. ### Supported Tasks and Leaderboards `ogbg-molpcba` should be used for molecular property prediction (with 128 properties to predict, not all present for all graphs), a binary classification task. The score used is Average Precision (AP) averaged over the tasks. The associated leaderboards are here: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molpcba) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molpcba). 
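To make the task-averaged metric concrete, here is a minimal NumPy sketch of NaN-masked Average Precision averaged over tasks. This is an illustration only — the exact masking conventions are an assumption, and the official `ogb` `Evaluator` should be used for leaderboard numbers.

```python
import numpy as np

def average_precision(labels, scores):
    """AP for one binary task (labels in {0, 1}; higher score = more positive)."""
    order = np.argsort(-scores)
    labels = labels[order]
    precision_at_k = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return float(np.sum(precision_at_k * labels) / labels.sum())

def mean_ap(y_true, y_score):
    """Average AP over tasks; NaN entries in y_true mark missing labels."""
    aps = []
    for task in range(y_true.shape[1]):
        mask = ~np.isnan(y_true[:, task])
        labels, scores = y_true[mask, task], y_score[mask, task]
        # AP is only defined when the task has both positive and negative labels.
        if labels.size and 0 < labels.sum() < labels.size:
            aps.append(average_precision(labels, scores))
    return float(np.mean(aps))
```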
## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset import torch from torch_geometric.data import Data from torch_geometric.loader import DataLoader dataset = load_dataset("graphs-datasets/ogbg-molpcba") # For the train set (replace by valid or test as needed). # Each row is a dict of Python lists, so tensorize the fields explicitly. graphs_list_pygeometric = [ Data( x=torch.tensor(graph["node_feat"]), edge_index=torch.tensor(graph["edge_index"]), edge_attr=torch.tensor(graph["edge_attr"]), y=torch.tensor(graph["y"]), num_nodes=graph["num_nodes"], ) for graph in dataset["train"] ] dataset_pygeometric = DataLoader(graphs_list_pygeometric) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | medium | | #graphs | 437,929 | | average #nodes | 26.0 | | average #edges | 28.1 | | average node degree | 2.2 | | average cluster coefficient | 0.002 | | MaxSCC ratio | 0.999 | | graph diameter | 13.6 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the labels to predict (here 128 labels, each equal to zero, one, or NaN if the property is not relevant for the graph) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be retrieved using ```python from ogb.graphproppred import PygGraphPropPredDataset dataset = PygGraphPropPredDataset(name = 'ogbg-molpcba') split_idx = dataset.get_idx_split() train = dataset[split_idx['train']] # valid, test ``` ## Additional Information ### Licensing Information The dataset has been released under the MIT license. 
### Citation Information ``` @inproceedings{hu-etal-2020-open, author = {Weihua Hu and Matthias Fey and Marinka Zitnik and Yuxiao Dong and Hongyu Ren and Bowen Liu and Michele Catasta and Jure Leskovec}, editor = {Hugo Larochelle and Marc Aurelio Ranzato and Raia Hadsell and Maria{-}Florina Balcan and Hsuan{-}Tien Lin}, title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs}, booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, year = {2020}, url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
OGB/ogbg-molpcba
[ "task_categories:graph-ml", "license:mit", "region:us" ]
2022-07-06T15:30:22+00:00
{"license": "mit", "task_categories": ["graph-ml"]}
2023-02-07T16:39:54+00:00
[]
[]
TAGS #task_categories-graph-ml #license-mit #region-us
Dataset Card for ogbg-molpcba ============================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * External Use + PyGeometric * Dataset Structure + Data Properties + Data Fields + Data Splits * Additional Information + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Homepage * Repository: Repo * Paper: Open Graph Benchmark: Datasets for Machine Learning on Graphs * Leaderboard: OGB leaderboard and Papers with code leaderboard ### Dataset Summary The 'ogbg-molpcba' dataset is a medium-sized molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford, to be a part of the Open Graph Benchmark. ### Supported Tasks and Leaderboards 'ogbg-molpcba' should be used for molecular property prediction (with 128 properties to predict, not all present for all graphs), a binary classification task. The score used is Average Precision (AP) averaged over the tasks. The associated leaderboards are here: OGB leaderboard and Papers with code leaderboard. External Use ------------ ### PyGeometric To load in PyGeometric, do the following: Dataset Structure ----------------- ### Data Properties ### Data Fields Each row of a given file is a graph, with: * 'node\_feat' (list: #nodes x #node-features): nodes * 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges * 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features * 'y' (list: 1 x #labels): contains the labels to predict (here 128 labels, each equal to zero, one, or NaN if the property is not relevant for the graph) * 'num\_nodes' (int): number of nodes of the graph ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. 
This information can be retrieved using Additional Information ---------------------- ### Licensing Information The dataset has been released under the MIT license. ### Contributions Thanks to @clefourrier for adding this dataset.
799fbbcc628fb95e188c3201f93a26c59b851a3e
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-instances)
  - [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_5k
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-06T17:51:40+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-5k"}
2022-08-04T18:36:03+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
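The per-task JSON Lines format described above (fields 'task', 'input', 'options', 'output') lends itself directly to building few-shot prompts. A minimal sketch, using made-up records and an assumed prompt template rather than real UnpredicTable rows:

```python
# Illustrative records in the format the card describes: each line of a
# task's .jsonl file is a dict with 'task', 'input', 'options', 'output'.
# These rows are invented for the sketch, not drawn from the dataset.
records = [
    {"task": "demo-task", "input": "Team: Yankees | Year: 2009",
     "options": ["yes", "no"], "output": "yes"},
    {"task": "demo-task", "input": "Team: Mets | Year: 2009",
     "options": ["yes", "no"], "output": "no"},
]

def few_shot_prompt(examples, query_input):
    """Concatenate examples into one few-shot prompt (one assumed format)."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(records, "Team: Red Sox | Year: 2009")
print(prompt)
```

The same concatenation idea extends to the multiple-choice case, where the 'options' list would additionally be rendered into the prompt.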
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks 
procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * 
UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 27, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
26966e8507ac28196c5ede4c9733318409e8d45d
This study compared COVID-19 vaccine-related material from two popular social media platforms, Reddit and Twitter, from January 1, 2020, to March 1, 2022. These two platforms were chosen due to their worldwide usage, vibrant discussions, and high user count. The timeframe included the earliest parts of the pandemic to trace the evolution of sentiments over time. Most importantly, these platforms were chosen because only a small number of comparative studies focused on the typical user, especially related to COVID-19 vaccine sentiment or other vaccine content. Significant effort was taken to identify and remove Twitter postings that were found to be directly from news agencies or bots. These posts were identified due to an overwhelmingly high post count during the 26-month period relative to the average posting of a “normal” user, as well as by visually inspecting tweets of users that appeared at an abnormal frequency. Both Twitter and Reddit datasets were limited to include only users who posted 200 or fewer times throughout our timeframe. These steps were important due to the repetitive nature of many bot tweets, which had the potential to skew sentiment calculations and misalign the goal of comparing the normal user base of both platforms. Though methodologies in harvesting Reddit and Twitter data differ slightly, both datasets underwent similar cleaning steps. Both were queried for the same relevant terms typically present in online discussions about COVID-19 vaccines. This step was important due to the tendency for some extended comment threads to meander off-topic. This occurrence was especially true with threads from some Reddit communities. The posting frequency of the two platforms was relatively similar in the early months of the pandemic. Frequency increased dramatically for both platforms in late September 2020-October 2020 as news of the vaccine rollout became more widespread. 
Though each platform displays four spikes in posting frequency at similar times (Oct 2020, Mar-Apr 2021, Aug-Sep 2021, Dec 2021-Jan 2022), each reached its maximum in a different month. Reddit reached its maximum posting in Mar-Apr 2021, while Twitter reached its maximum in Sep-Oct 2021.

Twitter Data

Approximately 13 million Tweets were harvested using the snscrape (document) and Tweepy API Python libraries (document). After removing tweets by suspected bots, news media, highly repetitive high-frequency users, or duplicate Tweets, our final Twitter dataset consisted of 9,518,270 Tweets authored by 3,006,075 Twitter users. The Tweets contained a total of approximately 16.32 million likes, with a maximum of 430,758 and an average of 14.9. Tweets cannot be downvoted, but approximately 4,794,865 received zero likes. Statistics on tweet sharing or retweets were not collected because this metric was not available for both platforms. I also collected vaccine-related data from the Reddit information-sharing social media platform, which is currently accessed by approximately 430 million users, approximately fifty percent of whom are located in the U.S. The platform is composed of user-created communities (subreddits), in which members adhere to a set of community regulations. Subreddit members have the option to post links, images, videos, and text. Community members then typically "upvote" or "downvote" a post based on their opinion of the quality of that post and/or leave comments. Depending on the distribution of votes, posts are classified as hot, new, rising, and controversial. The most popular posts within each category are then moved to the top of the community page. Comments are subjected to the same vote ranking system. The upvote/downvote system within Reddit is intended to increase the quality of the posts and minimize non-relevant material. Reddit data were gathered in three phases of this study, and therefore there are three Reddit datasets (R1, R1.2, and R2). 
For the R1 data set, I harvested approximately 18,000 posts from thirteen subreddits through the Reddit API on May 16, 2021. Because Reddit communities potentially contain some inherent bias due to strict community rules, as well as content monitoring by a moderator, these subreddits were chosen to create a non-biased dataset from a diverse selection of communities that vary widely in political views as well as position on vaccination. These subreddits were also chosen due to their large number of members (approximately five million in total). Data were cleaned first by combining each subreddit into a centralized database. The data were then organized by date and queried for terms specifically related to the COVID-19 vaccine. These terms included: COVID vaccine, vaccine, vaccination, immune, immunity, COVID vaccination, corona vaccine, COVID19 vaccination, COVID-19 vaccination, coronavirus vaccination, coronavirus vaccine, COVID-19 vaccine, Moderna, Pfizer, J&J, Johnson & Johnson, COVID vax, corona vax, covid-19 vax, covid19 vax, and coronavirus vax. Our finalized dataset consisted of 1,401 posts and 10,240 comments (11,641 in total) written by at least 8,281 authors/users, 1,048 of whom posted multiple times. The number of authors could have been as high as 9,013. These additional users are possible because Reddit removes the user ID from posts after a user deletes their account; however, the post content and upvotes remain.
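The cleaning steps described above — querying posts for vaccine-related terms and capping posts per user — can be sketched as follows. The post records, field names, and the small term subset here are hypothetical stand-ins; the actual study used the full term list and harvested Reddit/Twitter data:

```python
import re
from collections import Counter

# Hypothetical post records standing in for harvested Reddit/Twitter data.
posts = [
    {"user": "a", "text": "Got my Pfizer shot today, arm is sore."},
    {"user": "b", "text": "Great pasta recipe, highly recommend."},
    {"user": "a", "text": "Is the COVID vaccine safe for kids?"},
    {"user": "c", "text": "Moderna booster appointment booked."},
]

# A few of the query terms listed above (matched case-insensitively).
TERMS = ["covid vaccine", "vaccine", "vaccination", "moderna", "pfizer", "vax"]
pattern = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b", re.I)

# Step 1: keep only posts mentioning at least one vaccine-related term.
relevant = [p for p in posts if pattern.search(p["text"])]

# Step 2: drop suspected bots/news accounts by capping posts per user at 200.
counts = Counter(p["user"] for p in relevant)
kept = [p for p in relevant if counts[p["user"]] <= 200]

print(len(relevant), len(kept))
```

On this toy input the pasta post is filtered out in step 1, and no user exceeds the 200-post cap in step 2.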
NoCaptain/Reddit_Twitter_C19_Jan2020_Feb2022
[ "region:us" ]
2022-07-07T01:16:31+00:00
{}
2022-07-07T01:34:04+00:00
[]
[]
TAGS #region-us
This study compared COVID-19 vaccine-related material from two popular social media platforms, Reddit and Twitter from January 1, 2020, to March 1, 2022. These two platforms were chosen due to their worldwide usage, vibrant discussions, and high user count. The timeframe included the earliest parts of the pandemic to trace the evolution of sentiments over time. Most importantly, these platforms were chosen because only a small number of comparative studies focused on the typical user, especially related to COVID-19 vaccine sentiment or other vaccine content. Significant effort was taken to identify and remove Twitter postings that were found to be directly from news agencies or bots. These posts were identified due to an overwhelmingly high post count during the 26-months period relative to the average posting of a “normal” user as well as by visually inspecting tweets of users that appeared at an abnormal frequency. Both Twitter and Reddit datasets were limited to only include users who posted less than or equal to 200 times throughout our timeframe. These steps were important due to the repetitive nature of many bot tweets which had the potential to skew sentiment calculations and misalign the goal to compare the normal user base of both platforms. Though methodologies in harvesting Reddit and Twitter data differ slightly, both datasets underwent similar cleaning steps. Both were queried for the same relevant terms typically present in online discussions about COVID-19 vaccines. This step was important due to the tendency for some extended comment threads to meander off-topic. This occurrence was especially true with threads from some Reddit communities. The posting frequency of the two platforms was relatively similar in the early months of the pandemic. Frequency increased dramatically for both platforms in late September 2020-October 2020 as news of the vaccine roll became more widespread. 
Though each platform displays four spikes in posting frequency a similar times (Oct 2020, Mar-Apr 2021, Aug-Sep 2021, Dec 2021-Jan 2022) they each obtain a maximum in different months. Reddit reached its maximum posting in Mar-April 2021 while Twitter reached its maximum in Sep-Oct 2021. Twitter Data Approximately 13 million Tweets were harvested using the snscrape (document) and Tweepy API Python libraries (document). After removing tweets by suspected bots, news media, highly repetitive-high frequency users, or duplicate Tweets, our final Twitter data set consisted of 9,518,270 Tweets authored by 3,006,075 Twitter users. The Tweets contained a total of approximately 16.32 million likes with a maximum of 430,758 and an average of 14.9. Tweets cannot be downvoted but approximately 4,794,865 had zero likes attributed. Statistics on tweet sharing or retweets were not collected because this metric was not available for both platforms. I also collected vaccine-related data from the Reddit information-sharing social media platform that is currently accessed by approximately 430 million users with approximately fifty percent located in the U.S. The platform is composed of user-created communities (subreddits), in which members adhere to a set of community regulations. Subreddit members have the option to post links, images, videos, and text. Community members then typically "upvote" or "downvote" on a post based on their opinion of the quality of that post and/or leave comments. Depending on the distribution of votes, posts are classified as hot, new, rising, and controversial. The most popular posts within each category are then moved to the top of the community page. These comments are subjected to the same vote ranking system. The upvote/downvote system within Reddit is intended to increase the quality of the posts to minimize non-relevant material. Reddit data were gathered in three phases of this study and therefore there are three Reddit (R1, R1.2, and R2). 
For the R1 dataset, I harvested approximately 18,000 posts from thirteen subreddits through the Reddit API on May 16, 2021. Because Reddit communities potentially contain some inherent bias due to strict community rules, as well as content monitoring by moderators, these subreddits were chosen to create an unbiased dataset from a diverse selection of communities that vary widely in political views as well as in position on vaccination. These subreddits were also chosen for their large membership (approximately five million members in total). Data were cleaned by first combining each subreddit into a centralized database. The data were then organized by date and queried for terms specifically related to the COVID-19 vaccine. These terms included COVID vaccine, vaccine, vaccination, immune, immunity, COVID vaccination, corona vaccine, COVID19 vaccination, COVID-19 vaccination, coronavirus vaccination, coronavirus vaccine, COVID-19 vaccine, Moderna, Pfizer, J&J, Johnson & Johnson, COVID vax, corona vax, covid-19 vax, covid19 vax, and coronavirus vax. The finalized dataset consisted of 1,401 posts and 10,240 comments (11,641 items in total) written by at least 8,281 authors/users, 1,048 of whom posted multiple times. The number of unique authors could have been as high as 9,013; this uncertainty arises because Reddit removes the user ID from posts after a user deletes their account, although the post content and upvotes remain.
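The keyword query used to keep threads on topic can be approximated with a single case-insensitive regular expression over the term list. The helper below is an illustrative sketch using only a subset of the terms listed above:

```python
import re

# Subset of the vaccine-related query terms listed above.
TERMS = [
    "covid vaccine", "vaccination", "immunity",
    "moderna", "pfizer", "johnson & johnson", "covid vax",
]
# One case-insensitive alternation over all terms.
pattern = re.compile("|".join(re.escape(t) for t in TERMS), re.IGNORECASE)

def is_on_topic(text):
    """True if the post mentions any of the query terms."""
    return pattern.search(text) is not None

posts = [
    "Got my second Pfizer shot today!",
    "Anyone watch the game last night?",
]
on_topic = [p for p in posts if is_on_topic(p)]
```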
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
5ca2d71dccc8ad088e9367de4a77810558cb96aa
# AutoTrain Dataset for project: ZuoZhuan ## Dataset Description This dataset has been automatically processed by AutoTrain for project ZuoZhuan. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "tokens": [ "\u4e09", "\u5de1", "\u6578", "\u4e4b", "\u3002" ], "tags": [ 6, 23, 23, 15, 24 ] }, { "tokens": [ "\u9042", "\u6b78", "\uff0c", "\u5fa9", "\u547d", "\uff0c", "\u800c", "\u81ea", "\u62d8", "\u65bc", "\u53f8", "\u6557", "\u3002" ], "tags": [ 3, 23, 24, 23, 8, 24, 2, 15, 23, 13, 8, 8, 24 ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "tags": "Sequence(feature=ClassLabel(num_classes=28, names=['/a', '/b', '/c', '/d', '/f', '/j', '/m', '/mr', '/n', '/nn', '/nr', '/ns', '/nsr', '/p', '/q', '/r', '/rn', '/rr', '/rs', '/s', '/sv', '/t', '/u', '/v', '/w', '/wv', '/y', '/yv'], id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 5836 | | valid | 2860 |
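The integer tags above can be decoded back into their string labels using the `names` list from the `ClassLabel` feature. A minimal sketch that does not require the `datasets` library:

```python
# Label names copied from the ClassLabel feature above.
NAMES = ['/a', '/b', '/c', '/d', '/f', '/j', '/m', '/mr', '/n', '/nn',
         '/nr', '/ns', '/nsr', '/p', '/q', '/r', '/rn', '/rr', '/rs', '/s',
         '/sv', '/t', '/u', '/v', '/w', '/wv', '/y', '/yv']

def decode_tags(tag_ids):
    """Map integer tag ids back to their string labels."""
    return [NAMES[i] for i in tag_ids]

# Tags of the first sample shown in the card.
labels = decode_tags([6, 23, 23, 15, 24])
# labels == ['/m', '/v', '/v', '/r', '/w']
```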
ScarlettSun9/autotrain-data-ZuoZhuan
[ "region:us" ]
2022-07-07T05:11:55+00:00
{}
2022-07-07T06:02:10+00:00
[]
[]
TAGS #region-us
AutoTrain Dataset for project: ZuoZhuan ======================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project ZuoZhuan. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 6, 27, 17, 23, 27 ]
[ "passage: TAGS\n#region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
5da9a53f82067feee7cb06199d9bd3195909eb96
# K-12Corpus
vasugoel/K-12Corpus
[ "region:us" ]
2022-07-07T06:14:59+00:00
{}
2022-07-07T06:22:49+00:00
[]
[]
TAGS #region-us
# K-12Corpus
[ "# K-12Corpus" ]
[ "TAGS\n#region-us \n", "# K-12Corpus" ]
[ 6, 5 ]
[ "passage: TAGS\n#region-us \n# K-12Corpus" ]
02e48d0e72eda20d6c9d1aaac236c5bd8a4e5d3a
Please check here to see when the dataset was last updated. <br /> <h1> Last Updated July 12th, 2022 </h1>
CShorten/Last-Week-on-ML-ArXiv
[ "region:us" ]
2022-07-07T11:01:47+00:00
{}
2022-07-12T20:03:47+00:00
[]
[]
TAGS #region-us
Please check here to see when the dataset was last updated. <br /> <h1> Last Updated July 12th, 2022 </h1>
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
f1139695b37475f1df787a7fdaf629211ee187b7
# Dataset Card for ogbg-ppa ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-ppa)** - **[Repository](https://github.com/snap-stanford/ogb):**: - **Paper:**: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) - **Leaderboard:**: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-ppa) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-ppa) ### Dataset Summary The `ogbg-ppa` dataset is "a set of undirected protein association neighborhoods extracted from the protein-protein association networks of 1,581 species", over 37 taxonomic groups, by teams at Stanford, to be a part of the Open Graph Benchmark. See their website for dataset postprocessing. ### Supported Tasks and Leaderboards `ogbg-ppa` should be used for taxonomic group prediction, a 37-way multi-class classification task. The score used is Accuracy on the test set. 
## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader graphs_dataset = load_dataset("graphs-datasets/ogbg-ppa") # For the train set (replace by valid or test as needed) graphs_list = [Data(**graph) for graph in graphs_dataset["train"]] graphs_pygeometric = DataLoader(graphs_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | small | | #graphs | 158,100 | | average #nodes | 243.4 | | average #edges | 2,266.1 | | average node degree | 18.3 | | average cluster coefficient | 0.513 | | MaxSCC ratio | 1.000 | | graph diameter | 4.8 | ### Data Fields Each row of a given file is a graph, with: - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one) - `num_nodes` (int): number of nodes of the graph The nodes don't have specific features and are implicit from the lists of edges ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be found back using ```python from ogb.graphproppred import PygGraphPropPredDataset dataset = PygGraphPropPredDataset(name = 'ogbg-ppa') split_idx = dataset.get_idx_split() train = dataset[split_idx['train']] # valid, test ``` ## Additional Information ### Licensing Information The dataset has been released under CC-0 license. 
### Citation Information ``` @inproceedings{hu-etal-2020-open, author = {Weihua Hu and Matthias Fey and Marinka Zitnik and Yuxiao Dong and Hongyu Ren and Bowen Liu and Michele Catasta and Jure Leskovec}, editor = {Hugo Larochelle and Marc Aurelio Ranzato and Raia Hadsell and Maria{-}Florina Balcan and Hsuan{-}Tien Lin}, title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs}, booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, year = {2020}, url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
OGB/ogbg-ppa
[ "task_categories:graph-ml", "license:cc0-1.0", "region:us" ]
2022-07-07T11:35:18+00:00
{"license": "cc0-1.0", "task_categories": ["graph-ml"]}
2023-02-07T16:40:13+00:00
[]
[]
TAGS #task_categories-graph-ml #license-cc0-1.0 #region-us
Dataset Card for ogbg-ppa ========================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * External Use + PyGeometric * Dataset Structure + Data Properties + Data Fields + Data Splits * Additional Information + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage * Repository:: * Paper:: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) * Leaderboard:: OGB leaderboard and Papers with code leaderboard ### Dataset Summary The 'ogbg-ppa' dataset is "a set of undirected protein association neighborhoods extracted from the protein-protein association networks of 1,581 species", over 37 taxonomic groups, by teams at Stanford, to be a part of the Open Graph Benchmark. See their website for dataset postprocessing. ### Supported Tasks and Leaderboards 'ogbg-ppa' should be used for taxonomic group prediction, a 37-way multi-class classification task. The score used is Accuracy on the test set. External Use ------------ ### PyGeometric To load in PyGeometric, do the following: Dataset Structure ----------------- ### Data Properties ### Data Fields Each row of a given file is a graph, with: * 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges * 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features * 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one) * 'num\_nodes' (int): number of nodes of the graph The nodes don't have specific features and are implicit from the lists of edges ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be found back using Additional Information ---------------------- ### Licensing Information The dataset has been released under CC-0 license. 
### Contributions Thanks to @clefourrier for adding this dataset.
[ "### Dataset Summary\n\n\nThe 'ogbg-ppa' dataset is \"a set of undirected protein association neighborhoods extracted from the protein-protein association networks of 1,581 species\", over 37 taxonomic groups, by teams at Stanford, to be a part of the Open Graph Benchmark. See their website for dataset postprocessing.", "### Supported Tasks and Leaderboards\n\n\n'ogbg-ppa' should be used for taxonomic group prediction, a 37-way multi-class classification task. The score used is Average Precision on the test set.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph\n\n\nThe nodes don't have specific features and are implicit from the lists of edges", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under CC-0 license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ "TAGS\n#task_categories-graph-ml #license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nThe 'ogbg-ppa' dataset is \"a set of undirected protein association neighborhoods extracted from the protein-protein association networks of 1,581 species\", over 37 taxonomic groups, by teams at Stanford, to be a part of the Open Graph Benchmark. See their website for dataset postprocessing.", "### Supported Tasks and Leaderboards\n\n\n'ogbg-ppa' should be used for taxonomic group prediction, a 37-way multi-class classification task. The score used is Average Precision on the test set.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph\n\n\nThe nodes don't have specific features and are implicit from the lists of edges", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under CC-0 license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ 24, 78, 56, 25, 4, 153, 47, 17, 17 ]
[ "passage: TAGS\n#task_categories-graph-ml #license-cc0-1.0 #region-us \n### Dataset Summary\n\n\nThe 'ogbg-ppa' dataset is \"a set of undirected protein association neighborhoods extracted from the protein-protein association networks of 1,581 species\", over 37 taxonomic groups, by teams at Stanford, to be a part of the Open Graph Benchmark. See their website for dataset postprocessing.### Supported Tasks and Leaderboards\n\n\n'ogbg-ppa' should be used for taxonomic group prediction, a 37-way multi-class classification task. The score used is Average Precision on the test set.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph\n\n\nThe nodes don't have specific features and are implicit from the lists of edges### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under CC-0 license.### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
25ce4c5cbbdba141bcefdee713618838b47a7d5f
# Dataset Card for ogbg-code2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://ogb.stanford.edu/docs/graphprop/#ogbg-code2)** - **[Repository](https://github.com/snap-stanford/ogb):**: - **Paper:**: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) - **Leaderboard:**: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-code2) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-code2) ### Dataset Summary The `ogbg-code2` dataset contains Abstract Syntax Trees (ASTs) obtained from 450 thousand Python method definitions, from GitHub CodeSearchNet. "Methods are extracted from a total of 13,587 different repositories across the most popular projects on GitHub.", by teams at Stanford, to be a part of the Open Graph Benchmark. See their website or paper for dataset postprocessing. ### Supported Tasks and Leaderboards "The task is to predict the sub-tokens forming the method name, given the Python method body represented by AST and its node features. This task is often referred to as “code summarization”, because the model is trained to find succinct and precise description for a complete logical unit." The score is the F1 score of sub-token prediction. 
## External Use ### PyGeometric To load in PyGeometric, do the following: ```python from datasets import load_dataset from torch_geometric.data import Data from torch_geometric.loader import DataLoader graphs_dataset = load_dataset("graphs-datasets/ogbg-code2") # For the train set (replace by valid or test as needed) graphs_list = [Data(**graph) for graph in graphs_dataset["train"]] graphs_pygeometric = DataLoader(graphs_list) ``` ## Dataset Structure ### Data Properties | property | value | |---|---| | scale | medium | | #graphs | 452,741 | | average #nodes | 125.2 | | average #edges | 124.2 | | average node degree | 2.0 | | average cluster coefficient | 0.0 | | MaxSCC ratio | 1.000 | | graph diameter | 13.5 | ### Data Fields Each row of a given file is a graph, with: - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_feat` (list: #edges x #edge-features): features of edges - `node_feat` (list: #nodes x #node-features): the nodes features, embedded - `node_feat_expanded` (list: #nodes x #node-features): the nodes features, as code - `node_is_attributed` (list: 1 x #nodes): ? - `node_dfs_order` (list: #nodes x #1): the nodes order in the abstract tree, if parsed using a depth first search - `node_depth` (list: #nodes x #1): the nodes depth in the abstract tree - `y` (list: 1 x #tokens): contains the tokens to predict as method name - `num_nodes` (int): number of nodes of the graph - `ptr` (list: 2): index of first and last node of the graph - `batch` (list: 1 x #nodes): ? ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. 
This information can be found back using ```python from ogb.graphproppred import PygGraphPropPredDataset dataset = PygGraphPropPredDataset(name = 'ogbg-code2') split_idx = dataset.get_idx_split() train = dataset[split_idx['train']] # valid, test ``` More information (`node_feat_expanded`) has been added through the typeidx2type and attridx2attr csv files of the repo. ## Additional Information ### Licensing Information The dataset has been released under the MIT license. ### Citation Information ``` @inproceedings{hu-etal-2020-open, author = {Weihua Hu and Matthias Fey and Marinka Zitnik and Yuxiao Dong and Hongyu Ren and Bowen Liu and Michele Catasta and Jure Leskovec}, editor = {Hugo Larochelle and Marc Aurelio Ranzato and Raia Hadsell and Maria{-}Florina Balcan and Hsuan{-}Tien Lin}, title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs}, booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual}, year = {2020}, url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html}, } ``` ### Contributions Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset.
OGB/ogbg-code2
[ "task_categories:graph-ml", "license:mit", "region:us" ]
2022-07-07T12:51:15+00:00
{"license": "mit", "task_categories": ["graph-ml"]}
2023-02-07T16:40:02+00:00
[]
[]
TAGS #task_categories-graph-ml #license-mit #region-us
Dataset Card for ogbg-code2 =========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * External Use + PyGeometric * Dataset Structure + Data Properties + Data Fields + Data Splits * Additional Information + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage * Repository:: * Paper:: Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation) * Leaderboard:: OGB leaderboard and Papers with code leaderboard ### Dataset Summary The 'ogbg-code2' dataset contains Abstract Syntax Trees (ASTs) obtained from 450 thousand Python method definitions, from GitHub CodeSearchNet. "Methods are extracted from a total of 13,587 different repositories across the most popular projects on GitHub.", by teams at Stanford, to be a part of the Open Graph Benchmark. See their website or paper for dataset postprocessing. ### Supported Tasks and Leaderboards "The task is to predict the sub-tokens forming the method name, given the Python method body represented by AST and its node features. This task is often referred to as “code summarization”, because the model is trained to find succinct and precise description for a complete logical unit." The score is the F1 score of sub-token prediction. External Use ------------ ### PyGeometric To load in PyGeometric, do the following: Dataset Structure ----------------- ### Data Properties ### Data Fields Each row of a given file is a graph, with: * 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges * 'edge\_feat' (list: #edges x #edge-features): features of edges * 'node\_feat' (list: #nodes x #node-features): the nodes features, embedded * 'node\_feat\_expanded' (list: #nodes x #node-features): the nodes features, as code * 'node\_is\_attributed' (list: 1 x #nodes): ? 
* 'node\_dfs\_order' (list: #nodes x #1): the nodes order in the abstract tree, if parsed using a depth first search * 'node\_depth' (list: #nodes x #1): the nodes depth in the abstract tree * 'y' (list: 1 x #tokens): contains the tokens to predict as method name * 'num\_nodes' (int): number of nodes of the graph * 'ptr' (list: 2): index of first and last node of the graph * 'batch' (list: 1 x #nodes): ? ### Data Splits This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits. This information can be found back using More information ('node\_feat\_expanded') has been added through the typeidx2type and attridx2attr csv files of the repo. Additional Information ---------------------- ### Licensing Information The dataset has been released under the MIT license. ### Contributions Thanks to @clefourrier for adding this dataset.
[ "### Dataset Summary\n\n\nThe 'ogbg-code2' dataset contains Abstract Syntax Trees (ASTs) obtained from 450 thousands Python method definitions, from GitHub CodeSearchNet. \"Methods are extracted from a total of 13,587 different repositories across the most popular projects on GitHub.\", by teams at Stanford, to be a part of the Open Graph Benchmark. See their website or paper for dataset postprocessing.", "### Supported Tasks and Leaderboards\n\n\n\"The task is to predict the sub-tokens forming the method name, given the Python method body represented by AST and its node features. This task is often referred to as “code summarization”, because the model is trained to find succinct and precise description for a complete logical unit.\"\n\n\nThe score is the F1 score of sub-token prediction.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_feat' (list: #edges x #edge-features): features of edges\n* 'node\\_feat' (list: #nodes x #node-features): the nodes features, embedded\n* 'node\\_feat\\_expanded' (list: #nodes x #node-features): the nodes features, as code\n* 'node\\_is\\_attributed' (list: 1 x #nodes): ?\n* 'node\\_dfs\\_order' (list: #nodes x #1): the nodes order in the abstract tree, if parsed using a depth first search\n* 'node\\_depth' (list: #nodes x #1): the nodes depth in the abstract tree\n* 'y' (list: 1 x #tokens): contains the tokens to predict as method name\n* 'num\\_nodes' (int): number of nodes of the graph\n* 'ptr' (list: 2): index of first and last node of the graph\n* 'batch' (list: 1 x #nodes): ?", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back 
using\n\n\nMore information ('node\\_feat\\_expanded') has been added through the typeidx2type and attridx2attr csv files of the repo.\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under MIT license license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ "TAGS\n#task_categories-graph-ml #license-mit #region-us \n", "### Dataset Summary\n\n\nThe 'ogbg-code2' dataset contains Abstract Syntax Trees (ASTs) obtained from 450 thousands Python method definitions, from GitHub CodeSearchNet. \"Methods are extracted from a total of 13,587 different repositories across the most popular projects on GitHub.\", by teams at Stanford, to be a part of the Open Graph Benchmark. See their website or paper for dataset postprocessing.", "### Supported Tasks and Leaderboards\n\n\n\"The task is to predict the sub-tokens forming the method name, given the Python method body represented by AST and its node features. This task is often referred to as “code summarization”, because the model is trained to find succinct and precise description for a complete logical unit.\"\n\n\nThe score is the F1 score of sub-token prediction.\n\n\nExternal Use\n------------", "### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------", "### Data Properties", "### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_feat' (list: #edges x #edge-features): features of edges\n* 'node\\_feat' (list: #nodes x #node-features): the nodes features, embedded\n* 'node\\_feat\\_expanded' (list: #nodes x #node-features): the nodes features, as code\n* 'node\\_is\\_attributed' (list: 1 x #nodes): ?\n* 'node\\_dfs\\_order' (list: #nodes x #1): the nodes order in the abstract tree, if parsed using a depth first search\n* 'node\\_depth' (list: #nodes x #1): the nodes depth in the abstract tree\n* 'y' (list: 1 x #tokens): contains the tokens to predict as method name\n* 'num\\_nodes' (int): number of nodes of the graph\n* 'ptr' (list: 2): index of first and last node of the graph\n* 'batch' (list: 1 x #nodes): ?", "### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the 
provided data splits.\nThis information can be found back using\n\n\nMore information ('node\\_feat\\_expanded') has been added through the typeidx2type and attridx2attr csv files of the repo.\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset has been released under MIT license license.", "### Contributions\n\n\nThanks to @clefourrier for adding this dataset." ]
[ 21, 103, 98, 25, 4, 301, 87, 17, 17 ]
[ "passage: TAGS\n#task_categories-graph-ml #license-mit #region-us \n### Dataset Summary\n\n\nThe 'ogbg-code2' dataset contains Abstract Syntax Trees (ASTs) obtained from 450 thousands Python method definitions, from GitHub CodeSearchNet. \"Methods are extracted from a total of 13,587 different repositories across the most popular projects on GitHub.\", by teams at Stanford, to be a part of the Open Graph Benchmark. See their website or paper for dataset postprocessing.### Supported Tasks and Leaderboards\n\n\n\"The task is to predict the sub-tokens forming the method name, given the Python method body represented by AST and its node features. This task is often referred to as “code summarization”, because the model is trained to find succinct and precise description for a complete logical unit.\"\n\n\nThe score is the F1 score of sub-token prediction.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties" ]
3fada8f9df923b07e146b86208d9423a4e2bd109
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sunilmallya](https://huggingface.co/sunilmallya) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-43d00dce-10075336
[ "autotrain", "evaluation", "region:us" ]
2022-07-07T13:28:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "t5-base", "metrics": ["chrf"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-07T13:29:29+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @sunilmallya for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sunilmallya for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sunilmallya for evaluating this model." ]
[ 13, 69, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @sunilmallya for evaluating this model." ]
6e2e764d6e57e4aabf3bfaf82c1fb0fa211ea3d5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: philschmid/distilbart-cnn-12-6-samsum * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ArjunPrSarkhel](https://huggingface.co/ArjunPrSarkhel) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-7db0303b-10095338
[ "autotrain", "evaluation", "region:us" ]
2022-07-07T13:41:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "philschmid/distilbart-cnn-12-6-samsum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-07T14:18:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: philschmid/distilbart-cnn-12-6-samsum * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ArjunPrSarkhel for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: philschmid/distilbart-cnn-12-6-samsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ArjunPrSarkhel for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: philschmid/distilbart-cnn-12-6-samsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ArjunPrSarkhel for evaluating this model." ]
[ 13, 82, 20 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: philschmid/distilbart-cnn-12-6-samsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ArjunPrSarkhel for evaluating this model." ]
6cf05b33e344ff7739311ea1003e3d77d8b9bc64
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: eslamxm/mbert2mbert-finetune-fa * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@iserralv](https://huggingface.co/iserralv) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-9cdb3b8b-10115340
[ "autotrain", "evaluation", "region:us" ]
2022-07-07T13:47:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "eslamxm/mbert2mbert-finetune-fa", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-07T13:53:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: eslamxm/mbert2mbert-finetune-fa * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @iserralv for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: eslamxm/mbert2mbert-finetune-fa\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @iserralv for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: eslamxm/mbert2mbert-finetune-fa\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @iserralv for evaluating this model." ]
[ 13, 81, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: eslamxm/mbert2mbert-finetune-fa\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @iserralv for evaluating this model." ]
2ff5f5bb88d86ea40259a4c7d54e93b24a590502
# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[https://doi.org/10.6084/m9.figshare.13359104.v1](https://doi.org/10.6084/m9.figshare.13359104.v1) - **Repository:**[https://doi.org/10.6084/m9.figshare.13359104.v1](https://doi.org/10.6084/m9.figshare.13359104.v1) - **Paper:**[https://doi.org/10.1007/s00799-021-00302-1](https://doi.org/10.1007/s00799-021-00302-1) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains more than 100K textual descriptions of cultural items from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en), the Italian National Cultural aggregator. Each description is labeled either HIGH or LOW quality, according to its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). 
More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories. ### Supported Tasks and Leaderboards This dataset can be used for text classification tasks. The [paper](https://doi.org/10.1007/s00799-021-00302-1) introducing the dataset achieved an f1 score of `.783` for the task of classifying if a metadata record was low or high quality. Please see the [results table](https://link.springer.com/article/10.1007/s00799-021-00302-1/tables/4) for a full overview of the results reported in the paper. ### Languages The dataset consists of Italian metadata records. The labels are in English. ## Dataset Structure The dataset has only one configuration. ### Data Instances An example instance from the dataset: ``` python {'metadata_text': 'Figure:putto.Oggetti:ghirlanda di fiori', 'label': 0, 'source': 'OpereArteVisiva'} ``` ### Data Fields The datafields are: - `metadata_text`: this contains the metadata text which was sourced from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en) - `label`: this is the label indicating if the record is `High_Quality`, or `Low_Quality`. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. 
- `source`: the source of the metadata record ### Data Splits The dataset used 'ten-fold cross-validation' and doesn't report specific splits for train, validation and test data. ## Dataset Creation The dataset was generated using records from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en). From the paper introducing the dataset: > By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. p.221 ### Curation Rationale From the paper: > Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings.Footnote 10 Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). p.221 ### Source Data #### Initial Data Collection and Normalization The dataset was generated using records from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en). This repository is accessible via an OAI-PMH handler or via a [SPARQL endpoint](http://dati.culturaitalia.it/sparql). As discussed above duplicates were removed from the dataset. #### Who are the source language producers? The metadata producers are staff working in Italian cultural heritage institutions. ### Annotations #### Annotation process From the paper: > "Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections." 
To determine the quality of the collected descriptions the authors of the paper used guidelines from the [Istituto Centrale per il Catalogo e la Documentazione](http://www.iccd.beniculturali.it/) From the paper: > "More precisely, a specific section of the guidelines addresses how to describe any cultural item, clarifying that both the object and the subject of the item must be presented in the description as follows: > Object: the object typology and shape must be described. To describe the object, the cataloguer must refer to the vocabularies provided by ICCD, using specific terminology (e.g. the technique used for paintings and drawings, or the material for the archaeological items); > Subject: the cataloguer must report the iconographic and decorative settings of the item, such as the characters of the depicted scene in a painting and their attribution. Other aspects (e.g. the history behind the painting or the painter) should not be included." p.221 [More Information Needed] #### Who are the annotators? > "The annotation is carried out by an expert in cultural heritage who collaborated in the past with Cultura Italia and has therefore in-depth knowledge of the data characteristics and of the ICCD guidelines." p.222 ### Personal and Sensitive Information No personal or sensitive information is described in the paper. 
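The automatic pre-labelling rule quoted in the Annotation process section above can be sketched as follows. This is an illustration, not the authors' actual pipeline; the integer labels follow the dataset's `class_label` definition (0 → `Low_Quality`, 1 → `High_Quality`), and the tokenizer (whitespace split) is an assumption:

```python
# Sketch of the automatic pre-labelling rule described in the card
# (an illustration, not the authors' code): descriptions shorter than
# 3 tokens are labelled LOW quality without manual review.
LOW_QUALITY, HIGH_QUALITY = 0, 1  # class_label mapping from the dataset metadata

def auto_label(description: str, min_tokens: int = 3):
    """Return LOW_QUALITY for too-short descriptions, or None when the
    record still needs manual annotation against the ICCD guidelines."""
    if len(description.split()) < min_tokens:
        return LOW_QUALITY
    return None

print(auto_label("Mensola."))  # 1 token -> 0 (LOW quality)
print(auto_label("Figure: putto. Oggetti: ghirlanda di fiori"))  # -> None
```

Descriptions that pass the length check are not automatically HIGH quality; they simply remain candidates for the expert annotation step described above.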
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Lorenzini, Matteo - Rospocher, Marco - Tonelli, Sara ### Licensing Information [cc-by-4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{Lorenzini2020, author = "Matteo Lorenzini and Marco Rospocher and Sara Tonelli", title = "{Annotated dataset to assess the accuracy of the textual description of cultural heritage records}", year = "2020", month = "12", url = "https://figshare.com/articles/dataset/Annotated_dataset_to_assess_the_accuracy_of_the_textual_description_of_cultural_heritage_records/13359104", doi = "10.6084/m9.figshare.13359104.v1" } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
biglam/cultural_heritage_metadata_accuracy
[ "task_categories:text-classification", "task_ids:acceptability-classification", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:it", "license:cc-by-4.0", "region:us" ]
2022-07-07T13:51:59+00:00
{"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["expert-generated"], "language": ["it"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification"], "pretty_name": "Annotated dataset to assess the accuracy of the textual description of cultural heritage records", "dataset_info": {"features": [{"name": "metadata_text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Low_Quality", "1": "High_Quality"}}}}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29309108, "num_examples": 100821}], "download_size": 16309144, "dataset_size": 29309108}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-08T15:40:03+00:00
[]
[ "it" ]
TAGS #task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Italian #license-cc-by-4.0 #region-us
# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage:URL - Repository:URL - Paper:URL - Leaderboard: - Point of Contact: ### Dataset Summary The dataset contains more than 100K textual descriptions of cultural items from Cultura Italia, the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories. ### Supported Tasks and Leaderboards This dataset can be used for text classification tasks. The paper introducing the dataset achieved an f1 score of '.783' for the task of classifying if a metadata record was low or high quality. 
Please see the results table for a full overview of the results reported in the paper. ### Languages The dataset consists of Italian metadata records. The labels are in English. ## Dataset Structure The dataset has only one configuration. ### Data Instances An example instance from the dataset: ### Data Fields The datafields are: - 'metadata_text': this contains the metadata text which was sourced from Cultura Italia - 'label': this is the label indicating if the record is 'High_Quality', or 'Low_Quality'. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. - 'source': the source of the metadata record ### Data Splits The dataset used 'ten-fold cross-validation' and doesn't report specific splits for train, validation and test data. ## Dataset Creation The dataset was generated using records from Cultura Italia. From the paper introducing the dataset: > By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. p.221 ### Curation Rationale From the paper: > Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings.Footnote 10 Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). 
p.221 ### Source Data #### Initial Data Collection and Normalization The dataset was generated using records from Cultura Italia. This repository is accessible via an OAI-PMH handler or via a SPARQL endpoint. As discussed above duplicates were removed from the dataset. #### Who are the source language producers? The metadata producers are staff working in Italian cultural heritage institutions. ### Annotations #### Annotation process From the paper: > "Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections." To determine the quality of the collected descriptions the authors of the paper used guidelines from the Istituto Centrale per il Catalogo e la Documentazione From the paper: > "More precisely, a specific section of the guidelines addresses how to describe any cultural item, clarifying that both the object and the subject of the item must be presented in the description as follows: > Object: the object typology and shape must be described. To describe the object, the cataloguer must refer to the vocabularies provided by ICCD, using specific terminology (e.g. the technique used for paintings and drawings, or the material for the archaeological items); > Subject: the cataloguer must report the iconographic and decorative settings of the item, such as the characters of the depicted scene in a painting and their attribution. Other aspects (e.g. the history behind the painting or the painter) should not be included." p.221 #### Who are the annotators? > "The annotation is carried out by an expert in cultural heritage who collaborated in the past with Cultura Italia and has therefore in-depth knowledge of the data characteristics and of the ICCD guidelines." p.222 ### Personal and Sensitive Information No personal or sensitive information is described in the paper. 
## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators - Lorenzini, Matteo - Rospocher, Marco - Tonelli, Sara ### Licensing Information cc-by-4.0 ### Contributions Thanks to @davanstrien for adding this dataset.
[ "# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:URL\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe dataset contains more than 100K textual descriptions of cultural items from Cultura Italia, the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used for text classification tasks. 
The paper introducing the dataset achieved an f1 score of '.783' for the task of classifying if a metadata record was low or high quality. Please see the results table for a full overview of the results reported in the paper.", "### Languages\n\nThe dataset consists of Italian metadata records. The labels are in English.", "## Dataset Structure\n\nThe dataset has only one configuration.", "### Data Instances\n\nAn example instance from the dataset:", "### Data Fields\n\nThe datafields are:\n\n- 'metadata_text': this contains the metadata text which was sourced from Cultura Italia\n- 'label': this is the label indicating if the record is 'High_Quality', or 'Low_Quality'. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.\n- 'source': the source of the metadata record", "### Data Splits\n\nThe dataset used 'ten-fold cross-validation' and doesn't report specific splits for train, validation and test data.", "## Dataset Creation\n\nThe dataset was generated using records from Cultura Italia. From the paper introducing the dataset:\n\n> By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. 
p.221", "### Curation Rationale\n\nFrom the paper:\n\n> Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings.Footnote 10 Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). p.221", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset was generated using records from Cultura Italia. This repository is accessible via an OAI-PMH handler or via a SPARQL endpoint.\n\nAs discussed above duplicates were removed from the dataset.", "#### Who are the source language producers?\n\nThe metadata producers are staff working in Italian cultural heritage institutions.", "### Annotations", "#### Annotation process\n\nFrom the paper:\n\n> \"Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.\"\n\nTo determine the quality of the collected descriptions the authors of the paper used guidelines from the Istituto Centrale per il Catalogo e la Documentazione\n\nFrom the paper:\n\n> \"More precisely, a specific section of the guidelines addresses how to describe any cultural item, clarifying that both the object and the subject of the item must be presented in the description as follows:\n\n> Object: the object typology and shape must be described. To describe the object, the cataloguer must refer to the vocabularies provided by ICCD, using specific terminology (e.g. 
the technique used for paintings and drawings, or the material for the archaeological items);\n\n> Subject: the cataloguer must report the iconographic and decorative settings of the item, such as the characters of the depicted scene in a painting and their attribution. Other aspects (e.g. the history behind the painting or the painter) should not be included.\" p.221", "#### Who are the annotators?\n\n> \"The annotation is carried out by an expert in cultural heritage who collaborated in the past with Cultura Italia and has therefore in-depth knowledge of the data characteristics and of the ICCD guidelines.\" p.222", "### Personal and Sensitive Information\n\nNo personal or sensitive information is described in the paper.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n- Lorenzini, Matteo\n- Rospocher, Marco\n- Tonelli, Sara", "### Licensing Information\n\ncc-by-4.0", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Italian #license-cc-by-4.0 #region-us \n", "# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:URL\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe dataset contains more than 100K textual descriptions of cultural items from Cultura Italia, the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. 
The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used for text classification tasks. The paper introducing the dataset achieved an f1 score of '.783' for the task of classifying if a metadata record was low or high quality. Please see the results table for a full overview of the results reported in the paper.", "### Languages\n\nThe dataset consists of Italian metadata records. The labels are in English.", "## Dataset Structure\n\nThe dataset has only one configuration.", "### Data Instances\n\nAn example instance from the dataset:", "### Data Fields\n\nThe datafields are:\n\n- 'metadata_text': this contains the metadata text which was sourced from Cultura Italia\n- 'label': this is the label indicating if the record is 'High_Quality', or 'Low_Quality'. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.\n- 'source': the source of the metadata record", "### Data Splits\n\nThe dataset used 'ten-fold cross-validation' and doesn't report specific splits for train, validation and test data.", "## Dataset Creation\n\nThe dataset was generated using records from Cultura Italia. From the paper introducing the dataset:\n\n> By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. 
p.221", "### Curation Rationale\n\nFrom the paper:\n\n> Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings.Footnote 10 Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). p.221", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset was generated using records from Cultura Italia. This repository is accessible via an OAI-PMH handler or via a SPARQL endpoint.\n\nAs discussed above duplicates were removed from the dataset.", "#### Who are the source language producers?\n\nThe metadata producers are staff working in Italian cultural heritage institutions.", "### Annotations", "#### Annotation process\n\nFrom the paper:\n\n> \"Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.\"\n\nTo determine the quality of the collected descriptions the authors of the paper used guidelines from the Istituto Centrale per il Catalogo e la Documentazione\n\nFrom the paper:\n\n> \"More precisely, a specific section of the guidelines addresses how to describe any cultural item, clarifying that both the object and the subject of the item must be presented in the description as follows:\n\n> Object: the object typology and shape must be described. To describe the object, the cataloguer must refer to the vocabularies provided by ICCD, using specific terminology (e.g. 
the technique used for paintings and drawings, or the material for the archaeological items);\n\n> Subject: the cataloguer must report the iconographic and decorative settings of the item, such as the characters of the depicted scene in a painting and their attribution. Other aspects (e.g. the history behind the painting or the painter) should not be included.\" p.221", "#### Who are the annotators?\n\n> \"The annotation is carried out by an expert in cultural heritage who collaborated in the past with Cultura Italia and has therefore in-depth knowledge of the data characteristics and of the ICCD guidelines.\" p.222", "### Personal and Sensitive Information\n\nNo personal or sensitive information is described in the paper.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n- Lorenzini, Matteo\n- Rospocher, Marco\n- Tonelli, Sara", "### Licensing Information\n\ncc-by-4.0", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ 107, 27, 125, 27, 219, 76, 22, 14, 14, 129, 36, 123, 124, 4, 59, 27, 5, 271, 57, 19, 8, 7, 8, 7, 5, 22, 12, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Italian #license-cc-by-4.0 #region-us \n# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:URL\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe dataset contains more than 100K textual descriptions of cultural items from Cultura Italia, the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. 
The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories.", "passage: ### Supported Tasks and Leaderboards\n\nThis dataset can be used for text classification tasks. The paper introducing the dataset achieved an f1 score of '.783' for the task of classifying if a metadata record was low or high quality. Please see the results table for a full overview of the results reported in the paper.### Languages\n\nThe dataset consists of Italian metadata records. The labels are in English.## Dataset Structure\n\nThe dataset has only one configuration.### Data Instances\n\nAn example instance from the dataset:### Data Fields\n\nThe datafields are:\n\n- 'metadata_text': this contains the metadata text which was sourced from Cultura Italia\n- 'label': this is the label indicating if the record is 'High_Quality', or 'Low_Quality'. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.\n- 'source': the source of the metadata record### Data Splits\n\nThe dataset used 'ten-fold cross-validation' and doesn't report specific splits for train, validation and test data.## Dataset Creation\n\nThe dataset was generated using records from Cultura Italia. From the paper introducing the dataset:\n\n> By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. 
p.221### Curation Rationale\n\nFrom the paper:\n\n> Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings.Footnote 10 Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). p.221### Source Data" ]
574793eff62a8c25e1effd4edc0a6984f4ab40ad
# Dataset Card for one-year-of-tsla-on-reddit ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/one-year-of-tsla-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit) - **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearoftslaonreddit) ### Dataset Summary A year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. 
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
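The shared versus type-specific fields above can be sketched in a few lines of Python. This is a minimal illustration, not an official loader: the field names follow the card, but the sample records and their values are hypothetical.

```python
from datetime import datetime, timezone

def point_text(point: dict) -> str:
    """Return the main text of a data point, branching on its 'type' field.

    Per the schema above, posts carry 'title' (and optionally 'selftext'),
    while comments carry 'body'.
    """
    if point["type"] == "post":
        parts = [point["title"]]
        if point.get("selftext"):
            parts.append(point["selftext"])
        return "\n".join(parts)
    return point["body"]

def point_time(point: dict) -> datetime:
    """Convert the 'created_utc' UTC timestamp into an aware datetime."""
    return datetime.fromtimestamp(point["created_utc"], tz=timezone.utc)

# Illustrative records shaped like the fields described above (values invented).
post = {"type": "post", "id": "abc123", "created_utc": 1640995200,
        "title": "TSLA earnings", "selftext": "Thoughts?", "score": 42}
comment = {"type": "comment", "id": "def456", "created_utc": 1640995260,
           "body": "Holding long.", "sentiment": 0.6, "score": 3}

print(point_text(post))
print(point_time(comment).isoformat())
```

Because posts and comments live in separate files, branching on `type` like this is one way to process both streams with a single function.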
SocialGrep/one-year-of-tsla-on-reddit
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-07-07T16:23:17+00:00
{"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"]}
2022-07-07T17:54:18+00:00
[]
[ "en" ]
TAGS #annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for one-year-of-tsla-on-reddit ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Licensing Information ## Dataset Description - Homepage: URL - Reddit downloader used: URL - Point of Contact: Website ### Dataset Summary A year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'URL': the human-readable name of the data point's host subreddit. - 'URL': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
[ "# Dataset Card for one-year-of-tsla-on-reddit", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website", "### Dataset Summary\n\nA year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. 
Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
[ "TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for one-year-of-tsla-on-reddit", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website", "### Dataset Summary\n\nA year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. 
Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
[ 69, 18, 107, 21, 28, 8, 6, 41, 302, 5, 11 ]
[ "passage: TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n# Dataset Card for one-year-of-tsla-on-reddit## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website### Dataset Summary\n\nA year's worth of mentions of Tesla Inc. (TSLA) in Reddit posts and comments.### Languages\n\nMainly English.## Dataset Structure### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared." ]
51b0dec0bc3b5aa9939a6d610b6bdf34febc1830
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-large * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@abhijeet](https://huggingface.co/abhijeet) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-ad8ac8a3-10195347
[ "autotrain", "evaluation", "region:us" ]
2022-07-07T17:19:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "t5-large", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-07T17:51:02+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: t5-large * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @abhijeet for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-large\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-large\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
[ 13, 70, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-large\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
a7d1f3f216764b2f3c38ead13c96c8535a5bca1b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@abhijeet](https://huggingface.co/abhijeet) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-ad8ac8a3-10195349
[ "autotrain", "evaluation", "region:us" ]
2022-07-07T17:19:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "t5-base", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-07T17:27:18+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @abhijeet for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
[ 13, 69, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-base\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @abhijeet for evaluating this model." ]
72de778bde0f0b8f1d7cce7a390e7ba1b5cc961f
# Dataset Card for Stopword Lists for African Languages

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/rtatman/stopword-lists-for-african-languages
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

### Context: 
Some words, like “the” or “and” in English, are used a lot in speech and writing. For most Natural Language Processing applications, you will want to remove these very frequent words. This is usually done using a list of “stopwords” which has been compiled by hand.

### Content: 
This project uses the source texts provided by the African Storybook Project as a corpus and provides a number of tools to extract frequency lists and lists of stopwords from this corpus for the 60+ languages covered by ASP. 
Included in this dataset are the following languages:

* Afrikaans: stoplist and word frequency
* Hausa: stoplist and word frequency
* Lugbarati: word frequency only
* Lugbarati (Official): word frequency only
* Somali: stoplist and word frequency
* Sesotho: stoplist and word frequency
* Kiswahili: stoplist and word frequency
* Yoruba: stoplist and word frequency
* isiZulu: stoplist and word frequency

Files are named using the language’s ISO code. For each language, code.txt is the list of stopwords, and code_frequency_list.txt is word frequency information. A list of ISO codes and the languages associated with them may be found in ISO_codes.csv.

### Acknowledgements: 
This project attempts to fill in the gap in language coverage for African language stoplists by using the freely-available and open-licensed ASP Source project as a corpus.

Dual-licensed under CC-BY and Apache-2.0 licenses. Compiled by Liam Doherty. More information and the scripts used to generate these files are available [here](https://github.com/dohliam/more-stoplists).

### Inspiration: 
This dataset is mainly helpful for use during NLP analysis; however, there may be some interesting insights in the data.

* What qualities do stopwords share across languages? Given a novel language, could you predict what its stopwords should be?
* What stopwords are shared across languages?
* Often, related languages will have words with the same meaning and similar spellings. Can you automatically identify any of these pairs of words? 
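The file-naming convention above (stoplist in `code.txt`, frequencies in `code_frequency_list.txt`) can be sketched in Python. This is a minimal illustration: the helper names are invented here, and the Afrikaans (`af`) stoplist entries below are hypothetical examples rather than the dataset's actual contents.

```python
def stopword_files(iso_code: str) -> tuple:
    """Map a language's ISO code to its two file names, per the naming scheme above."""
    return (f"{iso_code}.txt", f"{iso_code}_frequency_list.txt")

def remove_stopwords(tokens, stopwords):
    """Drop stopwords from a token list, comparing case-insensitively."""
    return [t for t in tokens if t.lower() not in stopwords]

# Hypothetical Afrikaans ("af") stoplist entries, for illustration only.
af_stoplist = {"die", "en", "het", "'n"}

print(stopword_files("af"))
print(remove_stopwords(["Die", "kat", "en", "die", "hond"], af_stoplist))
```

In practice you would read the real stoplist from the `code.txt` file for your language before filtering.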
### You may also like: * [Stopword Lists for 19 Languages (mainly European and South Asian)](https://www.kaggle.com/rtatman/stopword-lists-for-19-languages) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@rtatman](https://kaggle.com/rtatman) ### Licensing Information The license for this dataset is other ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
chrisjay/ratman-stopword-lists-for-african-languages
[ "license:other", "region:us" ]
2022-07-07T18:38:15+00:00
{"license": ["other"], "kaggle_id": "rtatman/stopword-lists-for-african-languages"}
2022-10-25T09:39:52+00:00
[]
[]
TAGS #license-other #region-us
# Dataset Card for Stopword Lists for African Languages

## Table of Contents
- Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:

### Dataset Summary

### Context: 
Some words, like “the” or “and” in English, are used a lot in speech and writing. For most Natural Language Processing applications, you will want to remove these very frequent words. This is usually done using a list of “stopwords” which has been compiled by hand.

### Content: 
This project uses the source texts provided by the African Storybook Project as a corpus and provides a number of tools to extract frequency lists and lists of stopwords from this corpus for the 60+ languages covered by ASP.

Included in this dataset are the following languages:

* Afrikaans: stoplist and word frequency
* Hausa: stoplist and word frequency
* Lugbarati: word frequency only
* Lugbarati (Official): word frequency only
* Somali: stoplist and word frequency
* Sesotho: stoplist and word frequency
* Kiswahili: stoplist and word frequency
* Yoruba: stoplist and word frequency
* isiZulu: stoplist and word frequency

Files are named using the language’s ISO code. For each language, URL is the list of stopwords, and code_frequency_list.txt is word frequency information. A list of ISO codes and the languages associated with them may be found in ISO_codes.csv. 
### Acknowledgements: 
This project attempts to fill in the gap in language coverage for African language stoplists by using the freely-available and open-licensed ASP Source project as a corpus.
Dual-licensed under CC-BY and Apache-2.0 licenses. Compiled by Liam Doherty. More information and the scripts used to generate these files are available here.

### Inspiration: 
This dataset is mainly helpful for use during NLP analysis; however, there may be some interesting insights in the data.

* What qualities do stopwords share across languages? Given a novel language, could you predict what its stopwords should be?
* What stopwords are shared across languages?
* Often, related languages will have words with the same meaning and similar spellings. Can you automatically identify any of these pairs of words?

### You may also like:

* Stopword Lists for 19 Languages (mainly European and South Asian)

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

This dataset was shared by @rtatman

### Licensing Information

The license for this dataset is other

### Contributions
[ "# Dataset Card for Stopword Lists for African Languages", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context: \nSome words, like “the” or “and” in English, are used a lot in speech and writing. For most Natural Language Processing applications, you will want to remove these very frequent words. This is usually done using a list of “stopwords” which has been complied by hand.", "### Content: \nThis project uses the source texts provided by the African Storybook Project as a corpus and provides a number of tools to extract frequency lists and lists of stopwords from this corpus for the 60+ languages covered by ASP.\n\nIncluded in this dataset are the following languages:\n\n* Afrikaans: stoplist and word frequency\n* Hausa: stoplist and word frequency\n* Lugbarati: word frequency only\n* Lugbarati (Official): word frequency only\n* Somali: stoplist and word frequency\n* Sesotho: stoplist and word frequency\n* Kiswahili: stoplist and word frequency\n* Yoruba: stoplist and word frequency\n* isiZulu: stoplist and word frequency\n\nFiles are named using the language’s ISO code. For each language, URL is the list of stopwords, and code_frequency_list.txt is word frequency information. 
A list of ISO codes and the languages associated with them may be found in ISO_codes.csv.", "### Acknowledgements: \nThis project therefore attempts to fill in the gap in language coverage for African language stoplists by using the freely-available and open-licensed ASP Source project as a corpus.\nDual-licensed under CC-BY and Apache-2.0 license. Compiled by Liam Doherty. More information and the scripts used to generate these files are available here.", "### Inspiration: \nThis dataset is mainly helpful for use during NLP analysis, however there may be some interesting insights in the data.\n\n* What qualities do stopwords share across languages? Given a novel language, could you predict what its stopwords should be?\n* What stopwords are shared across languages?\n* Often, related languages will have words with the same meaning and similar spellings. Can you automatically identify any of these pairs of words?", "### You may also like:\n\n* Stopword Lists for 19 Languages (mainly European and South Asian)", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @rtatman", "### Licensing Information\n\nThe license for this dataset is other", "### Contributions" ]
[ "TAGS\n#license-other #region-us \n", "# Dataset Card for Stopword Lists for African Languages", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context: \nSome words, like “the” or “and” in English, are used a lot in speech and writing. For most Natural Language Processing applications, you will want to remove these very frequent words. This is usually done using a list of “stopwords” which has been compiled by hand.", "### Content: \nThis project uses the source texts provided by the African Storybook Project as a corpus and provides a number of tools to extract frequency lists and lists of stopwords from this corpus for the 60+ languages covered by ASP.\n\nIncluded in this dataset are the following languages:\n\n* Afrikaans: stoplist and word frequency\n* Hausa: stoplist and word frequency\n* Lugbarati: word frequency only\n* Lugbarati (Official): word frequency only\n* Somali: stoplist and word frequency\n* Sesotho: stoplist and word frequency\n* Kiswahili: stoplist and word frequency\n* Yoruba: stoplist and word frequency\n* isiZulu: stoplist and word frequency\n\nFiles are named using the language’s ISO code. For each language, URL is the list of stopwords, and code_frequency_list.txt is word frequency information. 
A list of ISO codes and the languages associated with them may be found in ISO_codes.csv.", "### Acknowledgements: \nThis project therefore attempts to fill in the gap in language coverage for African language stoplists by using the freely-available and open-licensed ASP Source project as a corpus.\nDual-licensed under CC-BY and Apache-2.0 license. Compiled by Liam Doherty. More information and the scripts used to generate these files are available here.", "### Inspiration: \nThis dataset is mainly helpful for use during NLP analysis, however there may be some interesting insights in the data.\n\n* What qualities do stopwords share across languages? Given a novel language, could you predict what its stopwords should be?\n* What stopwords are shared across languages?\n* Often, related languages will have words with the same meaning and similar spellings. Can you automatically identify any of these pairs of words?", "### You may also like:\n\n* Stopword Lists for 19 Languages (mainly European and South Asian)", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @rtatman", "### Licensing Information\n\nThe license for this dataset is other", "### Contributions" ]
[ 11, 13, 125, 25, 6, 67, 233, 88, 101, 24, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 16, 14, 5 ]
[ "passage: TAGS\n#license-other #region-us \n# Dataset Card for Stopword Lists for African Languages## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Context: \nSome words, like “the” or “and” in English, are used a lot in speech and writing. For most Natural Language Processing applications, you will want to remove these very frequent words. This is usually done using a list of “stopwords” which has been compiled by hand.### Content: \nThis project uses the source texts provided by the African Storybook Project as a corpus and provides a number of tools to extract frequency lists and lists of stopwords from this corpus for the 60+ languages covered by ASP.\n\nIncluded in this dataset are the following languages:\n\n* Afrikaans: stoplist and word frequency\n* Hausa: stoplist and word frequency\n* Lugbarati: word frequency only\n* Lugbarati (Official): word frequency only\n* Somali: stoplist and word frequency\n* Sesotho: stoplist and word frequency\n* Kiswahili: stoplist and word frequency\n* Yoruba: stoplist and word frequency\n* isiZulu: stoplist and word frequency\n\nFiles are named using the language’s ISO code. For each language, URL is the list of stopwords, and code_frequency_list.txt is word frequency information. A list of ISO codes and the languages associated with them may be found in ISO_codes.csv." ]
0ff369182bf7a0741f776cff044abdf354baa589
# Dataset Card for Airbnb Stock Price ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@evangower](https://kaggle.com/evangower) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/airbnb-stock-price-new
[ "license:cc0-1.0", "region:us" ]
2022-07-07T18:51:57+00:00
{"license": ["cc0-1.0"], "kaggle_id": "evangower/airbnb-stock-price"}
2022-09-08T16:58:51+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for Airbnb Stock Price ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @evangower ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ 14, 8, 125, 25, 68, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 16, 18, 5 ]
[ "passage: TAGS\n#license-cc0-1.0 #region-us \n# Dataset Card for Airbnb Stock Price## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators\n\nThis dataset was shared by @evangower### Licensing Information\n\nThe license for this dataset is cc0-1.0### Contributions" ]
31099d1b3bbdd47d5cb955d1e758370c993d6ca7
# Dataset Card for Pizza or Not Pizza? ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/carlosrunner/pizza-not-pizza - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Who doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task. All images were rescaled to have a maximum side length of 512 pixels. This is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper: Bossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. "Food-101 – Mining Discriminative Components with Random Forests." In *European conference on computer vision*, pp. 446-461. Springer, Cham, 2014. 
The original dataset can be found in the following locations: https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/ https://www.kaggle.com/datasets/dansbecker/food-101 https://paperswithcode.com/dataset/food-101 https://www.tensorflow.org/datasets/catalog/food101 Number of instances in each class: Pizza: 983 Not Pizza: 983 ## Acknowledgements The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2]. [1] http://www.foodspotting.com/ [2] http://www.foodspotting.com/terms/ ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@carlosrunner](https://kaggle.com/carlosrunner) ### Licensing Information The license for this dataset is other ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/pizza_not_pizza
[ "license:other", "region:us" ]
2022-07-07T18:57:37+00:00
{"license": ["other"], "kaggle_id": "carlosrunner/pizza-not-pizza"}
2022-07-07T18:58:03+00:00
[]
[]
TAGS #license-other #region-us
# Dataset Card for Pizza or Not Pizza? ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Who doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task. All images were rescaled to have a maximum side length of 512 pixels. This is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper: Bossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. "Food-101 – Mining Discriminative Components with Random Forests." In *European conference on computer vision*, pp. 446-461. Springer, Cham, 2014. The original dataset can be found in the following locations: URL URL URL URL Number of instances in each class: Pizza: 983 Not Pizza: 983 ## Acknowledgements The Food-101 data set consists of images from Foodspotting [1] which are not property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2]. [1] URL [2] URL ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @carlosrunner ### Licensing Information The license for this dataset is other ### Contributions
[ "# Dataset Card for Pizza or Not Pizza?", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nWho doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task.\n\nAll images were rescaled to have a maximum side length of 512 pixels. \n\nThis is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper: \nBossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. \"Food-101 – Mining Discriminative Components with Random Forests.\" In *European conference on computer vision*, pp. 446-461. 
Springer, Cham, 2014.\n\nThe original dataset can be found in the following locations:\nURL\nURL\nURL\nURL\n\nNumber of instances in each class:\nPizza: 983\nNot Pizza: 983", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @carlosrunner", "### Licensing Information\n\nThe license for this dataset is other", "### Contributions" ]
[ "TAGS\n#license-other #region-us \n", "# Dataset Card for Pizza or Not Pizza?", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nWho doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task.\n\nAll images were rescaled to have a maximum side length of 512 pixels. \n\nThis is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper: \nBossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. \"Food-101 – Mining Discriminative Components with Random Forests.\" In *European conference on computer vision*, pp. 446-461. 
Springer, Cham, 2014.\n\nThe original dataset can be found in the following locations:\nURL\nURL\nURL\nURL\n\nNumber of instances in each class:\nPizza: 983\nNot Pizza: 983", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @carlosrunner", "### Licensing Information\n\nThe license for this dataset is other", "### Contributions" ]
[ 11, 10, 125, 25, 183, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 16, 14, 5 ]
[ "passage: TAGS\n#license-other #region-us \n# Dataset Card for Pizza or Not Pizza?## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nWho doesn't like pizza? This dataset contains about 1000 images of pizza and 1000 images of dishes other than pizza. It can be used for a simple binary image classification task.\n\nAll images were rescaled to have a maximum side length of 512 pixels. \n\nThis is a subset of the Food-101 dataset. Information about the original dataset can be found in the following paper: \nBossard, Lukas, Matthieu Guillaumin, and Luc Van Gool. \"Food-101 – Mining Discriminative Components with Random Forests.\" In *European conference on computer vision*, pp. 446-461. 
Springer, Cham, 2014.\n\nThe original dataset can be found in the following locations:\nURL\nURL\nURL\nURL\n\nNumber of instances in each class:\nPizza: 983\nNot Pizza: 983### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators\n\nThis dataset was shared by @carlosrunner" ]
ce219b5f4b856ee0ab2868baccfc94917164d683
# Dataset Card for "simple-wiki" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://cs.pomona.edu/~dkauchak/simplification/](https://cs.pomona.edu/~dkauchak/simplification/) - **Repository:** [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) - **Paper:** [https://aclanthology.org/P11-2117/](https://aclanthology.org/P11-2117/) - **Point of Contact:** [David Kauchak]([email protected]) ### Dataset Summary This dataset contains pairs of equivalent sentences obtained from Wikipedia. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... 
{"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/simple-wiki") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 102225 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the source language producers? [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Annotations #### Annotation process [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the annotators? [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Personal and Sensitive Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Discussion of Biases [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Other Known Limitations [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Additional Information ### Dataset Curators [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Licensing Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Contributions
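The `{"set": [...]}` records shown above are one JSON object per line, so they can be inspected with the standard library alone. A minimal sketch of parsing that shape — the sample sentences here are invented for illustration, not taken from the dataset:

```python
import json

# Each record serializes as one JSON object per line, with a single
# key "set" holding a list of equivalent sentences.
raw_lines = [
    '{"set": ["The cat sat on the mat.", "A cat was sitting on the mat."]}',
    '{"set": ["He is a doctor.", "He works as a doctor."]}',
]

pairs = []
for line in raw_lines:
    record = json.loads(line)
    sentence_1, sentence_2 = record["set"]
    pairs.append((sentence_1, sentence_2))

print(pairs[0])
```

The same unpacking applies to the dicts yielded by `dataset["train"]` after loading from the Hub.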
embedding-data/simple-wiki
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
2022-07-07T21:57:40+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/simple-wiki", "pretty_name": "simple-wiki"}
2022-08-02T02:34:17+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us
# Dataset Card for "simple-wiki" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Point of Contact: David Kauchak ### Dataset Summary This dataset contains pairs of equivalent sentences obtained from Wikipedia. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for \"simple-wiki\"", "## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Point of Contact: David Kauchak", "### Dataset Summary\nThis dataset contains pairs of equivalent sentences obtained from Wikipedia.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n", "# Dataset Card for \"simple-wiki\"", "## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Point of Contact: David Kauchak", "### Dataset Summary\nThis dataset contains pairs of equivalent sentences obtained from Wikipedia.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 43, 10, 120, 26, 22, 24, 7, 76, 59, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n# Dataset Card for \"simple-wiki\"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Point of Contact: David Kauchak### Dataset Summary\nThis dataset contains pairs of equivalent sentences obtained from Wikipedia.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
57eee282a374a0a4e5993f4a76d5b1d2da184ae8
# Dataset Card for "sentence-compression" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research-datasets/sentence-compression](https://github.com/google-research-datasets/sentence-compression) - **Repository:** [More Information Needed](https://github.com/google-research-datasets/sentence-compression) - **Paper:** [More Information Needed](https://www.aclweb.org/anthology/D13-1155/) - **Point of Contact:** [Katja Filippova]([email protected]) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 14.2 MB ### Dataset Summary Dataset with pairs of equivalent sentences. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from using the dataset. Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. 
These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/sentence-compression") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 180000 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the source language producers? [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Annotations #### Annotation process [More Information Needed](https://github.com/google-research-datasets/sentence-compression) #### Who are the annotators? 
[More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Personal and Sensitive Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Discussion of Biases [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Other Known Limitations [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Licensing Information [More Information Needed](https://github.com/google-research-datasets/sentence-compression) ### Contributions
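For contrastive training, each two-sentence `set` is typically unpacked into an (anchor, positive) pair. A stdlib-only sketch of that step, using invented sample data — `TrainPair` is a hypothetical container (real pipelines often use `sentence_transformers.InputExample` instead), and `examples` mirrors the shape of `dataset["train"]`:

```python
from typing import NamedTuple

class TrainPair(NamedTuple):
    # Hypothetical container for one training example.
    anchor: str
    positive: str

# Mirrors the shape of dataset["train"]: a sequence of {"set": [...]} dicts.
examples = [
    {"set": ["Two men are playing chess.", "Two men play chess."]},
    {"set": ["A plane is taking off.", "An airplane takes off."]},
]

train_pairs = [TrainPair(*ex["set"]) for ex in examples]
print(train_pairs[0].anchor)
```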
embedding-data/sentence-compression
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
2022-07-07T21:58:31+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/sentence-compression", "pretty_name": "sentence-compression"}
2022-08-02T02:02:47+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us
# Dataset Card for "sentence-compression" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: Katja Filippova - Size of downloaded dataset files: - Size of the generated dataset: - Total amount of disk used: 14.2 MB ### Dataset Summary Dataset with pairs of equivalent sentences. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from using the dataset. Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for \"sentence-compression\"", "## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Katja Filippova\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 14.2 MB", "### Dataset Summary\nDataset with pairs of equivalent sentences.\nThe dataset is provided \"AS IS\" without any warranty, express or implied. \nGoogle disclaims all liability for any damages, direct or indirect, resulting from using the dataset.\n\nDisclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n", "# Dataset Card for \"sentence-compression\"", "## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Katja Filippova\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 14.2 MB", "### Dataset Summary\nDataset with pairs of equivalent sentences.\nThe dataset is provided \"AS IS\" without any warranty, express or implied. \nGoogle disclaims all liability for any damages, direct or indirect, resulting from using the dataset.\n\nDisclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 43, 12, 120, 53, 99, 24, 7, 79, 59, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n# Dataset Card for \"sentence-compression\"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Katja Filippova\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 14.2 MB### Dataset Summary\nDataset with pairs of equivalent sentences.\nThe dataset is provided \"AS IS\" without any warranty, express or implied. \nGoogle disclaims all liability for any damages, direct or indirect, resulting from using the dataset.\n\nDisclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Curation Rationale### Source Data" ]
f771767aceedf1d2aa46c2ac29f543e2d957123e
# Dataset Card for "altlex" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex) - **Repository:** [More Information Needed](https://github.com/chridey/altlex) - **Paper:** [https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf) - **Point of Contact:** [Christopher Hidey]([email protected]) ### Dataset Summary Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles." Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. 
## Dataset Structure Each example in the dataset contains a pair of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value": ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/altlex") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 112696 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/chridey/altlex) #### Who are the source language producers? [More Information Needed](https://github.com/chridey/altlex) ### Annotations #### Annotation process [More Information Needed](https://github.com/chridey/altlex) #### Who are the annotators? 
[More Information Needed](https://github.com/chridey/altlex) ### Personal and Sensitive Information [More Information Needed](https://github.com/chridey/altlex) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/chridey/altlex) ### Discussion of Biases [More Information Needed](https://github.com/chridey/altlex) ### Other Known Limitations [More Information Needed](https://github.com/chridey/altlex) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/chridey/altlex) ### Licensing Information [More Information Needed](https://github.com/chridey/altlex) ### Citation Information ### Contributions - [@chridey](https://github.com/chridey/altlex/commits?author=chridey) for adding this dataset to GitHub. ---
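Pair datasets like this one are commonly trained with losses (e.g. MultipleNegativesRankingLoss) that treat the other pairs in a batch as negatives, so pairs are shuffled and chunked into fixed-size batches. A stdlib-only sketch of that batching step, with invented sample pairs:

```python
import random

def batched(pairs, batch_size):
    """Yield successive fixed-size batches; the final short batch is kept."""
    for start in range(0, len(pairs), batch_size):
        yield pairs[start:start + batch_size]

pairs = [(f"sentence {i}", f"paraphrase {i}") for i in range(10)]
random.seed(0)
random.shuffle(pairs)  # shuffling varies the in-batch negatives per epoch

batches = list(batched(pairs, batch_size=4))
print(len(batches))  # 10 pairs at batch size 4 -> 3 batches (4, 4, 2)
```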
embedding-data/altlex
[ "language:en", "license:mit", "region:us" ]
2022-07-07T22:00:22+00:00
{"language": ["en"], "license": "mit", "paperswithcode_id": "embedding-data/altlex", "pretty_name": "altlex"}
2022-08-02T00:53:24+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
# Dataset Card for "altlex" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Point of Contact: Christopher Hidey ### Dataset Summary Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles." Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains a pair of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value": This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions - @chridey for adding this dataset to Github. ---
5bdbcb6537d782d1f3158a67480bcce6a8cfd63d
# Dataset Card for "flickr30k-captions"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://shannon.cs.illinois.edu/DenotationGraph/](https://shannon.cs.illinois.edu/DenotationGraph/)
- **Repository:** [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
- **Paper:** [https://transacl.org/ojs/index.php/tacl/article/view/229/33](https://transacl.org/ojs/index.php/tacl/article/view/229/33)
- **Point of Contact:** [Peter Young]([email protected]), [Alice Lai]([email protected]), [Micah Hodosh]([email protected]), [Julia Hockenmaier]([email protected])

### Dataset Summary

We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.

Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.

### Supported Tasks

- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.

### Languages

- English.

## Dataset Structure

Each example in the dataset contains quintets of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value":

```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```

This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.

### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/flickr30k-captions")
```

The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 31783
    })
})
```

Review an example `i` with:

```python
dataset["train"][i]["set"]
```

### Curation Rationale

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

#### Who are the source language producers?

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Annotations

#### Annotation process

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

#### Who are the annotators?

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Personal and Sensitive Information

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Discussion of Biases

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Other Known Limitations

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

## Additional Information

### Dataset Curators

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Licensing Information

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Citation Information

[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)

### Contributions

Thanks to [Peter Young]([email protected]), [Alice Lai]([email protected]), [Micah Hodosh]([email protected]), [Julia Hockenmaier]([email protected]) for adding this dataset.
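Since all five captions in a quintet describe the same image, any two of them form a positive pair for the similar-sentence training mentioned above. A minimal sketch of turning one quintet into pairs (the captions below are invented for illustration, not taken from Flickr30k):

```python
from itertools import combinations

# One illustrative quintet in the card's {"set": [...]} layout.
example = {"set": [
    "A man in a red shirt climbs a rock face.",
    "A climber in red scales a cliff.",
    "Someone is rock climbing outdoors.",
    "A man climbing a steep rock wall.",
    "A person in red ascends a rocky cliff.",
]}

# All five captions describe the same image, so every unordered pair is a
# positive (similar) pair: C(5, 2) = 10 pairs per quintet.
positive_pairs = list(combinations(example["set"], 2))
print(len(positive_pairs))  # 10
```

Applied to `dataset["train"][i]["set"]`, the same loop yields ten training pairs per row.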
embedding-data/flickr30k_captions_quintets
[ "language:en", "license:mit", "region:us" ]
2022-07-07T22:09:35+00:00
{"language": ["en"], "license": "mit", "paperswithcode_id": "embedding-data/flickr30k-captions", "pretty_name": "flickr30k-captions"}
2022-08-02T00:59:48+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
6ac46940576ff32018a57dca17314b4c8142a2e7
# Dataset Card for "coco_captions"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://cocodataset.org/#home](https://cocodataset.org/#home)
- **Repository:** [https://github.com/cocodataset/cocodataset.github.io](https://github.com/cocodataset/cocodataset.github.io)
- **Paper:** [More Information Needed](https://arxiv.org/abs/1405.0312)
- **Point of Contact:** [[email protected]]([email protected])
- **Size of downloaded dataset files:**
- **Size of the generated dataset:**
- **Total amount of disk used:** 6.32 MB

### Dataset Summary

COCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image; useful for sentence similarity tasks.

Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.

### Supported Tasks

- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.

### Languages

- English.

## Dataset Structure

Each example in the dataset contains quintets of similar sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value":

```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```

This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.

### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/coco_captions")
```

The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 82783
    })
})
```

Review an example `i` with:

```python
dataset["train"][i]["set"]
```

### Data Instances

[More Information Needed](https://cocodataset.org/#format-data)

### Data Splits

[More Information Needed](https://cocodataset.org/#format-data)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://cocodataset.org/#home)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://cocodataset.org/#home)

#### Who are the source language producers?

[More Information Needed](https://cocodataset.org/#home)

### Annotations

#### Annotation process

[More Information Needed](https://cocodataset.org/#home)

#### Who are the annotators?

[More Information Needed](https://cocodataset.org/#home)

### Personal and Sensitive Information

[More Information Needed](https://cocodataset.org/#home)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://cocodataset.org/#home)

### Discussion of Biases

[More Information Needed](https://cocodataset.org/#home)

### Other Known Limitations

[More Information Needed](https://cocodataset.org/#home)

## Additional Information

### Dataset Curators

[More Information Needed](https://cocodataset.org/#home)

### Licensing Information

The annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information

[More Information Needed](https://cocodataset.org/#home)

### Contributions

Thanks to:

- Tsung-Yi Lin - Google Brain
- Genevieve Patterson - MSR, Trash TV
- Matteo R. Ronchi - Caltech
- Yin Cui - Google
- Michael Maire - TTI-Chicago
- Serge Belongie - Cornell Tech
- Lubomir Bourdev - WaveOne, Inc.
- Ross Girshick - FAIR
- James Hays - Georgia Tech
- Pietro Perona - Caltech
- Deva Ramanan - CMU
- Larry Zitnick - FAIR
- Piotr Dollár - FAIR

for adding this dataset.
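With 82,783 training rows and five captions per row, flattening the train split yields 82,783 × 5 = 413,915 captions. A sketch of the flattening, mirroring the `dataset["train"][i]["set"]` access pattern above (toy rows with invented captions stand in for the real data):

```python
# Toy rows in the card's {"set": [...]} layout (illustrative captions).
rows = [
    {"set": ["a dog runs on grass", "a puppy sprinting outside",
             "dog running in a park", "a small dog in motion",
             "a dog playing on a lawn"]},
    {"set": ["two boats on a lake", "boats floating on calm water",
             "a pair of boats at rest", "small boats on still water",
             "two vessels on a quiet lake"]},
]

# Flatten: every row contributes its five captions.
captions = [caption for row in rows for caption in row["set"]]
print(len(captions))  # 10

# Scaled to the real split: 82,783 rows x 5 captions per row.
print(82783 * len(rows[0]["set"]))  # 413915
```

The same comprehension applied to the loaded `DatasetDict` produces one flat caption list for indexing or embedding.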
embedding-data/coco_captions_quintets
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "arxiv:1405.0312", "region:us" ]
2022-07-07T22:12:19+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/coco_captions", "pretty_name": "coco_captions"}
2022-08-02T01:18:54+00:00
[ "1405.0312" ]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-1405.0312 #region-us
[ 52, 12, 120, 53, 83, 24, 7, 81, 59, 6, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 38, 131 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-1405.0312 #region-us \n# Dataset Card for \"coco_captions\"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Point of Contact: info@URL\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 6.32 MB### Dataset Summary\n\nCOCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image; useful for sentence similarity tasks.\n\nDisclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.### Supported Tasks\n\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n\n- English.## Dataset Structure\n\nEach example in the dataset contains quintets of similar sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\":\n\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.### Usage Example\n\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\n\n\nReview an example 'i' with:### Data Instances### Data Splits## Dataset Creation" ]
77843c7e3d1aaa2c6310e29c040b30275139926c
# Mutopia Guitar Dataset ## Table of Contents - [Dataset Card Creation Guide](#mutopia-guitar-dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description - **Homepage:** [Mutopia Project](https://www.mutopiaproject.org/) - **Repository implementation of the paper:** [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset](https://github.com/AI-Guru/MMM-JSB) - **Based on Paper:** [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048) - **Point of Contact:** [Juan Carlos Piñeros](https://www.linkedin.com/in/juancarlospinerosp/) ### Dataset Summary Mutopia guitar dataset consists of the soloist guitar pieces of the [Mutopia Project](https://www.mutopiaproject.org/). I encoded the MIDI files into text tokens using the excellent [implementation](https://github.com/AI-Guru/MMM-JSB) of Dr. Tristan Beheren of the paper: [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048). The dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani. ### Supported Tasks and Leaderboards Anyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. Then, one can implement a generative model using NLP techniques, such as Transformers. 
## Dataset Structure ### Data Instances Each guitar piece is represented as a line of text that contains a series of tokens, for instance: PIECE_START: Where the piece begins PIECE_ENDS: Where the piece ends TIME_SIGNATURE: Time signature for the piece BPM: Tempo of the piece BAR_START: Beginning of a new bar NOTE_ON: Start of a new musical note specifying its MIDI note number TIME_DELTA: Duration until the next event NOTE_OFF: End of musical note specifying its MIDI note number ``` { 'text': PIECE_START TIME_SIGNATURE=2_4 BPM=74 TRACK_START INST=0 DENSITY=4 BAR_START NOTE_ON=52 TIME_DELTA=2.0 NOTE_OFF=52 NOTE_ON=45 NOTE_ON=49 TIME_DELTA=2.0 NOTE_OFF=49 NOTE_ON=52 TIME_DELTA=2.0 NOTE_OFF=45 NOTE_ON=47 NOTE_OFF=52 NOTE_ON=44 TIME_DELTA=2.0, ... } ``` ### Data Fields - `text`: Sequence of tokens that represent the guitar piece as explained in the paper [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048). ### Data Splits There are, at this moment, 395 MIDI guitar files in the Mutopia Project. I removed files of pieces that were not music for soloist guitar. After this removal, there were 372 MIDI files. I used an 80/20 split and augmented the training dataset by transposing the piece 1 octave above and below (24 semitones). The final result is then: **Train dataset:** 7325 pieces **Test dataset:** 74 pieces
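The token stream shown in the data instance above can be parsed with a few lines of Python. This is a hedged sketch, not part of the official MMM tooling: the event names follow the example instance, and whitespace-delimited `NAME=value` parsing is an assumption about the encoding.

```python
# Minimal sketch: parse a Mutopia-style token stream into (event, value) pairs.
# Event names follow the example instance in the card; whitespace-delimited
# "NAME=value" parsing is an assumption, not part of the official tooling.

def parse_events(token_stream: str):
    events = []
    for token in token_stream.split():
        # Tokens like BPM=74 carry a value; tokens like PIECE_START do not.
        name, _, value = token.partition("=")
        events.append((name, value if value else None))
    return events

example = (
    "PIECE_START TIME_SIGNATURE=2_4 BPM=74 TRACK_START INST=0 DENSITY=4 "
    "BAR_START NOTE_ON=52 TIME_DELTA=2.0 NOTE_OFF=52"
)
events = parse_events(example)
print(events[0])  # ('PIECE_START', None)
print(events[2])  # ('BPM', '74')
```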
juancopi81/mutopia_guitar_dataset
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:other-music", "license:cc", "arxiv:2008.06048", "region:us" ]
2022-07-07T23:06:39+00:00
{"license": ["cc"], "multilinguality": ["other-music"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Mutopia Guitar Dataset"}
2022-07-21T23:09:34+00:00
[ "2008.06048" ]
[]
TAGS #task_categories-text-generation #task_ids-language-modeling #multilinguality-other-music #license-cc #arxiv-2008.06048 #region-us
# Mutopia Guitar Dataset ## Table of Contents - Dataset Card Creation Guide - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Dataset Structure - Data Instances - Data Fields - Data Splits ## Dataset Description - Homepage: Mutopia Project - Repository implementation of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset - Based on Paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer - Point of Contact: Juan Carlos Piñeros ### Dataset Summary Mutopia guitar dataset consists of the soloist guitar pieces of the Mutopia Project. I encoded the MIDI files into text tokens using the excellent implementation of Dr. Tristan Beheren of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer. The dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani. ### Supported Tasks and Leaderboards Anyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. Then, one can implement a generative model using NLP techniques, such as Transformers. ## Dataset Structure ### Data Instances Each guitar piece is represented as a line of text that contains a series of tokens, for instance: PIECE_START: Where the piece begins PIECE_ENDS: Where the piece ends TIME_SIGNATURE: Time signature for the piece BPM: Tempo of the piece BAR_START: Begining of a new bar NOTE_ON: Start of a new musical note specifying its MIDI note number TIME_DELTA: Duration until the next event NOTE_OFF: End of musical note specifying its MIDI note number ### Data Fields - 'text': Sequence of tokens that represent the guitar piece as explained in the paper MMM: Exploring Conditional Multi-Track Music Generation with the Transformer. 
### Data Splits There are, at this moment, 395 MIDI guitar files in the Mutopia Project. I removed files of pieces that were not music for soloist guitar. After this removal, there were 372 MIDI files. I used an 80/20 split and augmented the training dataset by transposing the piece 1 octave above and below (24 semitones). The final result is then: Train dataset: 7325 pieces Test dataset: 74 pieces
[ "# Mutopia Guitar Dataset", "## Table of Contents\n- Dataset Card Creation Guide\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits", "## Dataset Description\n\n- Homepage: Mutopia Project\n- Repository implementation of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset\n- Based on Paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer\n- Point of Contact: Juan Carlos Piñeros", "### Dataset Summary\n\nMutopia guitar dataset consists of the soloist guitar pieces of the Mutopia Project. I encoded the MIDI files into text tokens using the excellent implementation of Dr. Tristan Beheren of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer. \n\nThe dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.", "### Supported Tasks and Leaderboards\n\nAnyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. 
Then, one can implement a generative model using NLP techniques, such as Transformers.", "## Dataset Structure", "### Data Instances\n\nEach guitar piece is represented as a line of text that contains a series of tokens, for instance:\n\nPIECE_START: Where the piece begins\nPIECE_ENDS: Where the piece ends\nTIME_SIGNATURE: Time signature for the piece\nBPM: Tempo of the piece\nBAR_START: Begining of a new bar\nNOTE_ON: Start of a new musical note specifying its MIDI note number\nTIME_DELTA: Duration until the next event\nNOTE_OFF: End of musical note specifying its MIDI note number", "### Data Fields\n\n- 'text': Sequence of tokens that represent the guitar piece as explained in the paper MMM: Exploring Conditional Multi-Track Music Generation with the Transformer.", "### Data Splits\n\nThere are, at this moment, 395 MIDI guitar files in the Mutopia Project. I removed files of pieces that were not music for soloist guitar. After this removal, there were 372 MIDI files.\n\nI used an 80/20 split and augmented the training dataset by transposing the piece 1 octave above and below (24 semitones). The final result is then:\n\nTrain dataset: 7325 pieces\nTest dataset: 74 pieces" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-other-music #license-cc #arxiv-2008.06048 #region-us \n", "# Mutopia Guitar Dataset", "## Table of Contents\n- Dataset Card Creation Guide\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits", "## Dataset Description\n\n- Homepage: Mutopia Project\n- Repository implementation of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset\n- Based on Paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer\n- Point of Contact: Juan Carlos Piñeros", "### Dataset Summary\n\nMutopia guitar dataset consists of the soloist guitar pieces of the Mutopia Project. I encoded the MIDI files into text tokens using the excellent implementation of Dr. Tristan Beheren of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer. \n\nThe dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.", "### Supported Tasks and Leaderboards\n\nAnyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. 
Then, one can implement a generative model using NLP techniques, such as Transformers.", "## Dataset Structure", "### Data Instances\n\nEach guitar piece is represented as a line of text that contains a series of tokens, for instance:\n\nPIECE_START: Where the piece begins\nPIECE_ENDS: Where the piece ends\nTIME_SIGNATURE: Time signature for the piece\nBPM: Tempo of the piece\nBAR_START: Begining of a new bar\nNOTE_ON: Start of a new musical note specifying its MIDI note number\nTIME_DELTA: Duration until the next event\nNOTE_OFF: End of musical note specifying its MIDI note number", "### Data Fields\n\n- 'text': Sequence of tokens that represent the guitar piece as explained in the paper MMM: Exploring Conditional Multi-Track Music Generation with the Transformer.", "### Data Splits\n\nThere are, at this moment, 395 MIDI guitar files in the Mutopia Project. I removed files of pieces that were not music for soloist guitar. After this removal, there were 372 MIDI files.\n\nI used an 80/20 split and augmented the training dataset by transposing the piece 1 octave above and below (24 semitones). The final result is then:\n\nTrain dataset: 7325 pieces\nTest dataset: 74 pieces" ]
[ 48, 6, 54, 78, 102, 68, 6, 123, 44, 101 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-other-music #license-cc #arxiv-2008.06048 #region-us \n# Mutopia Guitar Dataset## Table of Contents\n- Dataset Card Creation Guide\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits## Dataset Description\n\n- Homepage: Mutopia Project\n- Repository implementation of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset\n- Based on Paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer\n- Point of Contact: Juan Carlos Piñeros### Dataset Summary\n\nMutopia guitar dataset consists of the soloist guitar pieces of the Mutopia Project. I encoded the MIDI files into text tokens using the excellent implementation of Dr. Tristan Beheren of the paper: MMM: Exploring Conditional Multi-Track Music Generation with the Transformer. \n\nThe dataset mainly contains guitar music from western classical composers, such as Sor, Aguado, Carcassi, and Giuliani.### Supported Tasks and Leaderboards\n\nAnyone interested can use the dataset to train a model for symbolic music generation, which consists in treating symbols for music sounds (notes) as text tokens. Then, one can implement a generative model using NLP techniques, such as Transformers.## Dataset Structure### Data Instances\n\nEach guitar piece is represented as a line of text that contains a series of tokens, for instance:\n\nPIECE_START: Where the piece begins\nPIECE_ENDS: Where the piece ends\nTIME_SIGNATURE: Time signature for the piece\nBPM: Tempo of the piece\nBAR_START: Begining of a new bar\nNOTE_ON: Start of a new musical note specifying its MIDI note number\nTIME_DELTA: Duration until the next event\nNOTE_OFF: End of musical note specifying its MIDI note number" ]
faf55c14e79d6d8d20d1e11ab26f1095dc2a78b4
# Dataset Card for "SPECTER" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/allenai/specter](https://github.com/allenai/specter) - **Repository:** [More Information Needed](https://github.com/allenai/specter/blob/master/README.md) - **Paper:** [More Information Needed](https://arxiv.org/pdf/2004.07180.pdf) - **Point of Contact:** [@armancohan](https://github.com/armancohan), [@sergeyf](https://github.com/sergeyf), [@haroldrubio](https://github.com/haroldrubio), [@jinamshah](https://github.com/jinamshah) ### Dataset Summary Dataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers. Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. 
## Dataset Structure Each example in the dataset contains triplets of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". Each example is a dictionary with a key, "set", containing a list of three sentences (anchor, positive, and negative): ``` {"set": [anchor, positive, negative]} {"set": [anchor, positive, negative]} ... {"set": [anchor, positive, negative]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/SPECTER") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 684100 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://github.com/allenai/specter) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/allenai/specter) #### Who are the source language producers? [More Information Needed](https://github.com/allenai/specter) ### Annotations #### Annotation process [More Information Needed](https://github.com/allenai/specter) #### Who are the annotators? 
[More Information Needed](https://github.com/allenai/specter) ### Personal and Sensitive Information [More Information Needed](https://github.com/allenai/specter) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/allenai/specter) ### Discussion of Biases [More Information Needed](https://github.com/allenai/specter) ### Other Known Limitations [More Information Needed](https://github.com/allenai/specter) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/allenai/specter) ### Licensing Information [More Information Needed](https://github.com/allenai/specter) ### Citation Information ### Contributions
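The (anchor, positive, negative) triplets described in the card above support a margin-based training objective. As a minimal sketch (toy 3-dimensional vectors and the margin value are illustrative assumptions; real training would embed the sentences with a Sentence Transformers model rather than use hand-written vectors):

```python
# Minimal sketch of the triplet objective that (anchor, positive, negative)
# examples support. The toy 3-d embeddings and margin are illustrative
# assumptions, not the Sentence Transformers implementation.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge loss: push the positive closer to the anchor than the
    # negative, by at least `margin`; zero once that holds.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

anchor   = [1.0, 0.0, 0.0]
positive = [0.9, 0.1, 0.0]   # near the anchor -> loss already zero
negative = [0.0, 1.0, 0.0]   # far from the anchor
print(triplet_loss(anchor, positive, negative))  # prints 0.0
```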
embedding-data/SPECTER
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "arxiv:2004.07180", "region:us" ]
2022-07-08T01:41:34+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/SPECTER", "pretty_name": "SPECTER"}
2022-08-02T02:45:52+00:00
[ "2004.07180" ]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2004.07180 #region-us
# Dataset Card for "SPECTER" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: @armancohan, @sergeyf, @haroldrubio, @jinamshah ### Dataset Summary Dataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers. Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ## Dataset Structure Each example in the dataset contains triplets of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". Each example is a dictionary with a key, "set", containing a list of three sentences (anchor, positive, and negative): This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for \"SPECTER\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: @armancohan, @sergeyf, @haroldrubio, @jinamshah", "### Dataset Summary\n\nDataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers.\n\nDisclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "## Dataset Structure\nEach example in the dataset contains triplets of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nEach example is a dictionary with a key, \"set\", containing a list of three sentences (anchor, positive, and negative):\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using triplets.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2004.07180 #region-us \n", "# Dataset Card for \"SPECTER\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: @armancohan, @sergeyf, @haroldrubio, @jinamshah", "### Dataset Summary\n\nDataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers.\n\nDisclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "## Dataset Structure\nEach example in the dataset contains triplets of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nEach example is a dictionary with a key, \"set\", containing a list of three sentences (anchor, positive, and negative):\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using triplets.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 52, 9, 120, 42, 74, 106, 59, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2004.07180 #region-us \n# Dataset Card for \"SPECTER\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: @armancohan, @sergeyf, @haroldrubio, @jinamshah### Dataset Summary\n\nDataset containing triplets (three sentences): anchor, positive, and negative. Contains titles of papers.\n\nDisclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.## Dataset Structure\nEach example in the dataset contains triplets of equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nEach example is a dictionary with a key, \"set\", containing a list of three sentences (anchor, positive, and negative):\n\n\nThis dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process" ]
5b4a6177016e743c5d3ea50f144b202a80923777
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Abhijeet3922](https://huggingface.co/Abhijeet3922) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-e02a2fb8-10255357
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T01:56:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-08T02:47:13+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Abhijeet3922 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Abhijeet3922 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Abhijeet3922 for evaluating this model." ]
[ 13, 97, 19 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Abhijeet3922 for evaluating this model." ]
f475d9ca10f6eae1f39e756d14610ce7c5bb515c
# Dataset Card for "QQP_triplets" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - **Repository:** [More Information Needed](http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv) - **Paper:** [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) - **Point of Contact:** [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5) ### Dataset Summary This dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative). Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. 
These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. ``` {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} ... {"query": [anchor], "pos": [positive], "neg": [negative1, negative2, ..., negativeN]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train them. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/QQP_triplets") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 101762 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) #### Who are the source language producers? [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Annotations #### Annotation process [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) #### Who are the annotators? 
[More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Personal and Sensitive Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Discussion of Biases [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Other Known Limitations Here are a few important things to keep in mind about this dataset: - Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. Therefore, we supplemented the dataset with negative examples. - One source of negative examples was pairs of “related questions” which, although pertaining to similar topics, are not truly semantically equivalent. - The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that have been applied to the final dataset (e.g., removal of questions with extremely long question details). - The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect. 
## Additional Information ### Dataset Curators [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Licensing Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Citation Information [More Information Needed](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) ### Contributions Thanks to [Kornél Csernai](https://www.quora.com/profile/Korn%C3%A9l-Csernai), [Nikhil Dandekar](https://www.quora.com/profile/Nikhil-Dandekar), [Shankar Iyer](https://www.quora.com/profile/Shankar-Iyer-5) for adding this dataset.
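As a small addendum to the structure described above, the `{"query": [...], "pos": [...], "neg": [...]}` format can be unpacked into individual (anchor, positive, negative) triples for triplet-style training. This is a minimal, self-contained sketch: the `expand_record` helper and the toy record are illustrative only, not part of the dataset or its official tooling.

```python
# Minimal sketch: expand one triplet record of the form
# {"query": [...], "pos": [...], "neg": [...]} into (anchor, positive, negative)
# triples. The record below is a toy illustration, not actual Quora data.

def expand_record(record):
    """Pair the anchor with its positive and with each negative in turn."""
    anchor = record["query"][0]
    positive = record["pos"][0]
    return [(anchor, positive, negative) for negative in record["neg"]]

example = {
    "query": ["How do I learn Python?"],
    "pos": ["What is the best way to learn Python?"],
    "neg": ["How do I learn French?", "Why is Python called Python?"],
}

triples = expand_record(example)
print(len(triples))  # one triple per negative example
```

In the hosted dataset each row wraps this dictionary under the `set` feature (e.g., `dataset["train"][i]["set"]`), so the helper would be applied to that inner dictionary.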
embedding-data/QQP_triplets
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
2022-07-08T02:15:59+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/QQP_triplets", "pretty_name": "QQP_triplets"}
2022-08-02T02:14:14+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us
# Dataset Card for "QQP_triplets" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: Kornél Csernai, Nikhil Dandekar, Shankar Iyer ### Dataset Summary This dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative). Disclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train them. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Here are a few important things to keep in mind about this dataset: - Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. Therefore, we supplemented the dataset with negative examples. - One source of negative examples was pairs of “related questions” which, although pertaining to similar topics, are not truly semantically equivalent. - The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that have been applied to the final dataset (e.g., removal of questions with extremely long question details). - The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect. ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to Kornél Csernai, Nikhil Dandekar, Shankar Iyer for adding this dataset.
[ "# Dataset Card for \"QQP_triplets\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Kornél Csernai, Nikhil Dandekar, Shankar Iyer", "### Dataset Summary\n\nThis dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative).\n\nDisclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. \n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train them.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nHere are a few important things to keep in mind about this dataset:\n\n- Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. \nTherefore, we supplemented the dataset with negative examples. \n- One source of negative examples were pairs of “related questions” which, although pertaining to similar topics, \nare not truly semantically equivalent.\n- The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that\nhave been applied to the final dataset (e.g., removal of questions with extremely long question details).\n- The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect.", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to Kornél Csernai, Nikhil Dandekar, Shankar Iyer for adding this dataset." ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n", "# Dataset Card for \"QQP_triplets\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Kornél Csernai, Nikhil Dandekar, Shankar Iyer", "### Dataset Summary\n\nThis dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative).\n\nDisclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. \n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train them.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nHere are a few important things to keep in mind about this dataset:\n\n- Our original sampling method returned an imbalanced dataset with many more true examples of duplicate pairs than non-duplicates. \nTherefore, we supplemented the dataset with negative examples. \n- One source of negative examples were pairs of “related questions” which, although pertaining to similar topics, \nare not truly semantically equivalent.\n- The distribution of questions in the dataset should not be taken to be representative of the distribution of questions asked on Quora. This is, in part, because of the combination of sampling procedures and also due to some sanitization measures that\nhave been applied to the final dataset (e.g., removal of questions with extremely long question details).\n- The ground-truth labels contain some amount of noise: they are not guaranteed to be perfect.", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to Kornél Csernai, Nikhil Dandekar, Shankar Iyer for adding this dataset." ]
[ 43, 13, 120, 37, 87, 24, 7, 88, 59, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 202, 5, 6, 6, 29 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n# Dataset Card for \"QQP_triplets\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Kornél Csernai, Nikhil Dandekar, Shankar Iyer### Dataset Summary\n\nThis dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. The data is organized as triplets (anchor, positive, negative).\n\nDisclaimer: The team releasing Quora data did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example is a dictionary with three keys (query, pos, and neg) containing a list each (triplets). The first key contains an anchor sentence, the second a positive sentence, and the third a list of negative sentences. \n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train them.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Curation Rationale### Source Data#### Initial Data Collection and Normalization" ]
6916e1cc1f7e94d352eba0c4da9db405e3b8e379
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275361
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T03:33:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-large", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T07:32:13+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-large\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-large\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 74, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-large\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
9b90e970f216fbcc654581cab1b16fbe99dd5f17
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275362
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T03:33:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T04:16:37+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 74, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
5c1dd64af4db75264190473c180e043a7609fc02
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275363
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T03:34:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T04:51:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 77, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
45c22e873e5c978481df69a3b235d656bbe331be
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-reddit_tifu * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275364
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T03:34:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-reddit_tifu", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T04:59:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-reddit_tifu * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-reddit_tifu\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-reddit_tifu\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 77, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-reddit_tifu\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
6fa498e8bd014bc29d0fb9f633bb640915482874
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275365
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T04:16:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T04:41:09+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 75, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
35bc3b2798ce0851a47cf62d448ccf280ac36ed1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: knkarthick/bart-large-xsum-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275366
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T04:41:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "knkarthick/bart-large-xsum-samsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T05:11:02+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: knkarthick/bart-large-xsum-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 81, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
9a4f33a45599118ee2c5d516f1f3739fee07a683
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275367
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T04:52:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-08T05:35:14+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ikadebi for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
[ 13, 75, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ikadebi for evaluating this model." ]
9353fdfd146e16bf3e4e1ba1262e694a659be9f5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sheikmohdimran](https://huggingface.co/sheikmohdimran) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-3c39b441-10285368
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T05:00:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-08T05:34:29+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @sheikmohdimran for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sheikmohdimran for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sheikmohdimran for evaluating this model." ]
[ 13, 74, 19 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @sheikmohdimran for evaluating this model." ]
20eb88d4d2d3c552437189d4710e720a9e7dd4eb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-wikihow * Dataset: xsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Scofield](https://huggingface.co/Scofield) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-a94d90d8-10385372
[ "autotrain", "evaluation", "region:us" ]
2022-07-08T09:28:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-wikihow", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-08T10:33:26+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-wikihow * Dataset: xsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Scofield for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-wikihow\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Scofield for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-wikihow\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Scofield for evaluating this model." ]
[ 13, 74, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-wikihow\n* Dataset: xsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Scofield for evaluating this model." ]
fc23600643bde1d53e2e7ffd595c946167a7ade7
# GEM Submission Submission name: This is a test submission 3
GEM-submissions/lewtun__this-is-a-test-submission-3__1657282248
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-07-08T11:10:50+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "This is a test submission 3", "tags": ["evaluation", "benchmark"]}
2022-07-08T11:10:53+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: This is a test submission 3
[ "# GEM Submission\n\nSubmission name: This is a test submission 3" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: This is a test submission 3" ]
[ 19, 16 ]
[ "passage: TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n# GEM Submission\n\nSubmission name: This is a test submission 3" ]
c7ee9e962c2c38438ff0027c362b5f4c806f0f50
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines (jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
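The few-shot task format described in the Data Instances section above can be sketched in a few lines of Python. This is an illustrative example, not code from the dataset authors: only the field names ('task', 'input', 'options', 'output') follow the card, and the record values are invented.

```python
import json

# Two invented records in the task format described above; the field
# names follow the dataset card, the values are made up for illustration.
jsonl = """\
{"task": "color_of_fruit", "input": "[fruit] banana", "options": ["yellow", "red"], "output": "yellow"}
{"task": "color_of_fruit", "input": "[fruit] cherry", "options": ["yellow", "red"], "output": "red"}"""

# Each line of a task file is one example dictionary.
examples = [json.loads(line) for line in jsonl.splitlines()]

def few_shot_prompt(examples, query):
    """Concatenate (input -> output) pairs and append the unanswered query."""
    shots = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples)
    return f"{shots}\n{query} ->"

prompt = few_shot_prompt(examples, "[fruit] lemon")
print(prompt)
```

One possible way to use such a prompt is as input to a language model, whose completion is then compared against the strings in the 'options' field.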
MicPie/unpredictable_unique
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T15:21:01+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-unique"}
2022-08-04T19:16:10+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * UnpredicTable-rated-low
  * UnpredicTable-rated-medium
  * UnpredicTable-rated-high

* UnpredicTable data subsets based on the website of origin:
  * UnpredicTable-baseball-fantasysports-yahoo-com
  * UnpredicTable-bulbapedia-bulbagarden-net
  * UnpredicTable-cappex-com
  * UnpredicTable-cram-com
  * UnpredicTable-dividend-com
  * UnpredicTable-dummies-com
  * UnpredicTable-en-wikipedia-org
  * UnpredicTable-ensembl-org
  * UnpredicTable-gamefaqs-com
  * UnpredicTable-mgoblog-com
  * UnpredicTable-mmo-champion-com
  * UnpredicTable-msdn-microsoft-com
  * UnpredicTable-phonearena-com
  * UnpredicTable-sittercity-com
  * UnpredicTable-sporcle-com
  * UnpredicTable-studystack-com
  * UnpredicTable-support-google-com
  * UnpredicTable-w3-org
  * UnpredicTable-wiki-openmoko-org
  * UnpredicTable-wkdu-org

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * UnpredicTable-cluster00
  * UnpredicTable-cluster01
  * UnpredicTable-cluster02
  * UnpredicTable-cluster03
  * UnpredicTable-cluster04
  * UnpredicTable-cluster05
  * UnpredicTable-cluster06
  * UnpredicTable-cluster07
  * UnpredicTable-cluster08
  * UnpredicTable-cluster09
  * UnpredicTable-cluster10
  * UnpredicTable-cluster11
  * UnpredicTable-cluster12
  * UnpredicTable-cluster13
  * UnpredicTable-cluster14
  * UnpredicTable-cluster15
  * UnpredicTable-cluster16
  * UnpredicTable-cluster17
  * UnpredicTable-cluster18
  * UnpredicTable-cluster19
  * UnpredicTable-cluster20
  * UnpredicTable-cluster21
  * UnpredicTable-cluster22
  * UnpredicTable-cluster23
  * UnpredicTable-cluster24
  * UnpredicTable-cluster25
  * UnpredicTable-cluster26
  * UnpredicTable-cluster27
  * UnpredicTable-cluster28
  * UnpredicTable-cluster29
  * UnpredicTable-cluster-noise

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad.
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

- 'task': task identifier
- 'input': column elements of a specific row in the table.
- 'options': for multiple-choice classification, it provides the options to choose from.
- 'output': target column element of the same row as input.
- 'pageTitle': the title of the page containing the table.
- 'outputColName': output column name
- 'url': URL of the website containing the table
- 'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
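As a toy illustration of the example layout described under Data Instances, the few-shot concatenation could be sketched as follows. The dicts here are made-up stand-ins for dataset rows, not actual data, and the prompt template is likewise an assumption for illustration:

```python
# Made-up examples mimicking the field layout described above
# ('task', 'input', 'options', 'output'); these are NOT real dataset rows.
examples = [
    {"task": "toy_task", "input": "fruit: banana", "options": ["yellow", "red"], "output": "yellow"},
    {"task": "toy_task", "input": "fruit: cherry", "options": ["yellow", "red"], "output": "red"},
]
query = {"task": "toy_task", "input": "fruit: lemon", "options": ["yellow", "red"]}

def build_fewshot_prompt(examples, query):
    """Concatenate solved examples and append the unsolved query,
    leaving the final 'Output:' slot for the model to complete."""
    lines = [
        f"Input: {ex['input']} | Options: {' / '.join(ex['options'])} | Output: {ex['output']}"
        for ex in examples
    ]
    lines.append(f"Input: {query['input']} | Options: {' / '.join(query['options'])} | Output:")
    return "\n".join(lines)

prompt = build_fewshot_prompt(examples, query)
print(prompt)
```

Each line of the prompt corresponds to one example dict, with the query left incomplete at the end.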
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from the WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from the WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
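To make the tables-to-tasks idea from the Source Data section concrete, here is a minimal toy sketch. It captures only the core row-to-example conversion; the authors' actual pipeline involves additional filtering and heuristics described in the publication, and the table below is invented for illustration:

```python
def table_to_task(header, rows, output_col):
    """Convert one table into a few-shot task: each row becomes an example
    whose 'input' is the remaining columns and whose 'output' is the value
    in output_col. Distinct output values double as the 'options'.
    Toy sketch only -- not the authors' pipeline."""
    out_idx = header.index(output_col)
    options = sorted({row[out_idx] for row in rows})
    return [
        {
            "input": ", ".join(
                f"{h}: {v}" for i, (h, v) in enumerate(zip(header, row)) if i != out_idx
            ),
            "options": options,
            "output": row[out_idx],
        }
        for row in rows
    ]

# Invented example table (not from the corpus).
header = ["player", "team", "position"]
rows = [["Smith", "Reds", "pitcher"], ["Jones", "Blues", "catcher"]]
task = table_to_task(header, rows, output_col="position")
print(task[0])
```

Picking a different `output_col` from the same table would yield a different task, which is how a single web table can generate multiple few-shot tasks.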
[ "# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks 
procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * 
UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 28, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply 
our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
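As a rough stdlib sketch of the per-row dictionary format described above ('task', 'input', 'options', 'output'), several examples of one task can be concatenated into a single few-shot prompt, leaving the final output for the model to complete. The example records and the `to_few_shot_prompt` helper are invented for illustration and are not part of the corpus:

```python
# Hypothetical examples in the per-row dictionary format described above;
# the field names match the dataset card, the values are invented.
examples = [
    {"task": "demo", "input": "capital of France", "options": [], "output": "Paris"},
    {"task": "demo", "input": "capital of Japan", "options": [], "output": "Tokyo"},
    {"task": "demo", "input": "capital of Italy", "options": [], "output": ""},
]

def to_few_shot_prompt(examples):
    """Concatenate examples of one task into a few-shot prompt.

    The last example's empty output leaves a completion slot for the model.
    """
    lines = []
    for ex in examples:
        if ex.get("options"):  # only present for multiple-choice tasks
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}".rstrip())
    return "\n".join(lines)

prompt = to_few_shot_prompt(examples)
```

In the actual dataset, the 'input' field already concatenates several column elements of a table row, so each example maps to one row of the source table.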
be7d206d5e5dea14bab0a989c83aefb7ae1d0ebf
# Dataset Card for "Amazon-QA" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://jmcauley.ucsd.edu/data/amazon/qa/](http://jmcauley.ucsd.edu/data/amazon/qa/) - **Repository:** [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [Julian McAuley](https://cseweb.ucsd.edu//~jmcauley/#) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 247 MB ### Dataset Summary This dataset contains Question and Answer data from Amazon. Disclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. 
### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary: ``` {"query": [sentence_1], "pos": [sentence_2]} {"query": [sentence_1], "pos": [sentence_2]} ... {"query": [sentence_1], "pos": [sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/Amazon-QA") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['query', 'pos'], num_rows: 1095290 }) }) ``` Review an example `i` with: ```python dataset["train"][0] ``` ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) #### Who are the source language producers? [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Annotations #### Annotation process [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) #### Who are the annotators? 
[More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Personal and Sensitive Information [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Discussion of Biases [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Other Known Limitations [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ## Additional Information ### Dataset Curators [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Licensing Information [More Information Needed](http://jmcauley.ucsd.edu/data/amazon/qa/) ### Citation Information ### Contributions
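Since each record in the query/pos format shown under Dataset Structure is one question paired with one answer, it maps directly onto an (anchor, positive) pair for contrastive training. A minimal, library-free sketch (the records and the `to_training_pairs` helper are invented for illustration):

```python
# Hypothetical records in the {"query": ..., "pos": ...} format shown above;
# the actual dataset holds ~1.1M such rows.
records = [
    {"query": "does this kettle whistle?", "pos": "Yes, it whistles when the water boils."},
    {"query": "is the case waterproof?", "pos": "It is splash-resistant, not fully waterproof."},
]

def to_training_pairs(records):
    """Turn query/pos dictionaries into (anchor, positive) tuples.

    With in-batch negatives, every other positive in a batch doubles
    as a negative example for a given query.
    """
    return [(r["query"], r["pos"]) for r in records]

pairs = to_training_pairs(records)
```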
embedding-data/Amazon-QA
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
2022-07-08T16:03:12+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/Amazon-QA", "pretty_name": "Amazon-QA"}
2022-08-02T02:36:27+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us
# Dataset Card for "Amazon-QA" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: Julian McAuley - Size of downloaded dataset files: - Size of the generated dataset: - Total amount of disk used: 247 MB ### Dataset Summary This dataset contains Question and Answer data from Amazon. Disclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary: This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for \"Amazon-QA\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Julian McAuley\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 247 MB", "### Dataset Summary\n\nThis dataset contains Question and Answer data from Amazon.\n\nDisclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary:\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n", "# Dataset Card for \"Amazon-QA\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Julian McAuley\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 247 MB", "### Dataset Summary\n\nThis dataset contains Question and Answer data from Amazon.\n\nDisclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary:\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 43, 10, 120, 52, 57, 24, 7, 62, 59, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n# Dataset Card for \"Amazon-QA\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Julian McAuley\n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 247 MB### Dataset Summary\n\nThis dataset contains Question and Answer data from Amazon.\n\nDisclaimer: The team releasing Amazon-QA did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example in the dataset contains pairs of query and answer sentences and is formatted as a dictionary:\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
04beaebf206b2ddfba715d8307b7cb48c4c01468
# Dataset Card for "PAQ_pairs" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/PAQ](https://github.com/facebookresearch/PAQ) - **Repository:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Paper:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Point of Contact:** [More Information Needed](https://github.com/facebookresearch/PAQ) - **Size of downloaded dataset files:** - **Size of the generated dataset:** - **Total amount of disk used:** 21 Bytes ### Dataset Summary Pairs questions and answers obtained from Wikipedia. Disclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. 
## Dataset Structure Each example in the dataset contains pairs of sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". The first sentence is a question and the second an answer; thus, both sentences would be similar. ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/PAQ_pairs") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 64371441 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Data Instances [More Information Needed](https://github.com/facebookresearch/PAQ) ### Data Fields [More Information Needed](https://github.com/facebookresearch/PAQ) ### Data Splits [More Information Needed](https://github.com/facebookresearch/PAQ) ## Dataset Creation [More Information Needed](https://github.com/facebookresearch/PAQ) ### Curation Rationale [More Information Needed](https://github.com/facebookresearch/PAQ) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/facebookresearch/PAQ) #### Who are the source language producers? [More Information Needed](https://github.com/facebookresearch/PAQ) ### Annotations #### Annotation process [More Information Needed](https://github.com/facebookresearch/PAQ) #### Who are the annotators? 
[More Information Needed](https://github.com/facebookresearch/PAQ) ### Personal and Sensitive Information [More Information Needed](https://github.com/facebookresearch/PAQ) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/facebookresearch/PAQ) ### Discussion of Biases [More Information Needed](https://github.com/facebookresearch/PAQ) ### Other Known Limitations [More Information Needed](https://github.com/facebookresearch/PAQ) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/facebookresearch/PAQ) ### Licensing Information The PAQ QA-pairs and metadata are licensed under [CC-BY-SA](https://creativecommons.org/licenses/by-sa/3.0/). Other data is licensed according to the accompanying license files. ### Citation Information ``` @article{lewis2021paq, title={PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them}, author={Patrick Lewis and Yuxiang Wu and Linqing Liu and Pasquale Minervini and Heinrich Küttler and Aleksandra Piktus and Pontus Stenetorp and Sebastian Riedel}, year={2021}, eprint={2102.07033}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@patrick-s-h-lewis](https://github.com/patrick-s-h-lewis) for adding this dataset.
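A minimal sketch, assuming each row follows the two-element "set" format described under Dataset Structure (the records and the `split_sets` helper below are invented for illustration):

```python
# Hypothetical records in the {"set": [question, answer]} format described above.
records = [
    {"set": ["Who wrote Hamlet?", "William Shakespeare"]},
    {"set": ["What is the capital of Peru?", "Lima"]},
]

def split_sets(records):
    """Unpack each two-element "set" into a (question, answer) pair."""
    pairs = []
    for record in records:
        question, answer = record["set"]
        pairs.append((question, answer))
    return pairs

qa_pairs = split_sets(records)
```

Because the first element is always the question and the second its answer, the unpacked pairs can feed the same (anchor, positive) training setup the card recommends for Sentence Transformers.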
embedding-data/PAQ_pairs
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "arxiv:2102.07033", "region:us" ]
2022-07-08T16:05:27+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/PAQ_pairs", "pretty_name": "PAQ_pairs"}
2022-08-02T01:58:28+00:00
[ "2102.07033" ]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2102.07033 #region-us
# Dataset Card for "PAQ_pairs" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: - Size of downloaded dataset files: - Size of the generated dataset: - Total amount of disk used: 21 Bytes ### Dataset Summary Pairs questions and answers obtained from Wikipedia. Disclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". The first sentence is a question and the second an answer; thus, both sentences would be similar. This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format: Review an example 'i' with: ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The PAQ QA-pairs and metadata are licensed under CC-BY-SA. Other data is licensed according to the accompanying license files. ### Contributions Thanks to @patrick-s-h-lewis for adding this dataset.
[ "# Dataset Card for \"PAQ_pairs\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: \n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 21 Bytes", "### Dataset Summary\n\nPairs questions and answers obtained from Wikipedia.\n\nDisclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\". The first sentence is a question and the second an answer; thus, both sentences would be similar.\n\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.", "### Usage Example\n\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe PAQ QA-pairs and metadata is licensed under CC-BY-SA. \nOther data is licensed according to the accompanying license files.", "### Contributions\n\nThanks to @patrick-s-h-lewis for adding this dataset." ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2102.07033 #region-us \n", "# Dataset Card for \"PAQ_pairs\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: \n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 21 Bytes", "### Dataset Summary\n\nPairs questions and answers obtained from Wikipedia.\n\nDisclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains pairs of sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\". The first sentence is a question and the second an answer; thus, both sentences would be similar.\n\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.", "### Usage Example\n\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe PAQ QA-pairs and metadata is licensed under CC-BY-SA. \nOther data is licensed according to the accompanying license files.", "### Contributions\n\nThanks to @patrick-s-h-lewis for adding this dataset." ]
[ 51, 12, 120, 49, 59, 24, 7, 100, 59, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 41, 23 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #arxiv-2102.07033 #region-us \n# Dataset Card for \"PAQ_pairs\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: \n- Size of downloaded dataset files: \n- Size of the generated dataset: \n- Total amount of disk used: 21 Bytes### Dataset Summary\n\nPairs questions and answers obtained from Wikipedia.\n\nDisclaimer: The team releasing PAQ QA pairs did not upload the dataset to the Hub and did not write a dataset card. \nThese steps were done by the Hugging Face team.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example in the dataset contains pairs of sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\". The first sentence is a question and the second an answer; thus, both sentences would be similar.\n\n\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar pairs of sentences.### Usage Example\n\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format:\n\nReview an example 'i' with:### Data Instances### Data Fields### Data Splits## Dataset Creation" ]
30535094aa0d12c59a3b22a2ead8ff219886e502
# Dataset Card for "UnpredicTable-cluster-noise" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
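For illustration, a single record in this schema and its concatenation into a few-shot prompt might look as follows. This is a sketch: the field values and the prompt template are hypothetical, not taken from the dataset.

```python
import json

# Two hypothetical examples from one task, in the schema described above:
# 'task', 'input', 'options', 'output'. The values are made up.
jsonl = """\
{"task": "example-task", "input": "[Player] Smith [Team] Reds", "options": ["Pitcher", "Catcher"], "output": "Pitcher"}
{"task": "example-task", "input": "[Player] Jones [Team] Blues", "options": ["Pitcher", "Catcher"], "output": "Catcher"}
"""
examples = [json.loads(line) for line in jsonl.splitlines()]

def to_prompt(examples):
    """Concatenate examples into one few-shot prompt: all but the last
    example act as demonstrations, the last one is the query."""
    parts = [f"{ex['input']}\n{ex['output']}" for ex in examples[:-1]]
    parts.append(examples[-1]["input"])
    return "\n\n".join(parts)

prompt = to_prompt(examples)
print(prompt)
```

The same concatenation idea underlies using these tasks for few-shot fine-tuning: demonstrations and query are joined into a single input sequence.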
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
## Dataset Metadata

- **Dataset ID:** MicPie/unpredictable_cluster-noise
- **Created:** 2022-07-08
- **Last modified:** 2022-08-04
- **License:** apache-2.0
- **Language:** English (monolingual)
- **Size category:** 100K<n<1M
- **Annotations creators:** no-annotation
- **Language creators:** found
- **Task categories:** multiple-choice, question-answering, zero-shot-classification, text2text-generation, table-question-answering, text-generation, text-classification, tabular-classification
- **Task IDs:** multiple-choice-qa, extractive-qa, open-domain-qa, closed-domain-qa, closed-book-qa, open-book-qa, language-modeling, multi-class-classification, natural-language-inference, topic-classification, multi-label-classification, tabular-multi-class-classification, tabular-multi-label-classification
- **arXiv:** [2208.01009](https://arxiv.org/abs/2208.01009)
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster-noise\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
3787e6e7e6193f9c43a7300367018ddb86afcfcd
# Dataset Card for "UnpredicTable-cluster00" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
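The Data Instances section above describes each task as a jsonlines file of example dicts with `task`, `input`, `options`, and `output` fields that "can be concatenated as a few-shot task". A minimal sketch of that concatenation step follows; the two sample records and the prompt template are illustrative assumptions, not data or formatting taken from the actual dataset or the paper:

```python
import json

def build_few_shot_prompt(examples):
    """Concatenate examples from one UnpredicTable task into a few-shot prompt.

    Each example is a dict with 'input' and 'output' fields, and optionally
    'options' (the multiple-choice classes), as documented in the Data Fields
    section of the dataset card. The Input/Options/Output template below is
    just one plausible rendering.
    """
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):
            block += "Options: " + ", ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        parts.append(block)
    return "\n\n".join(parts)

# Two made-up records in the documented jsonlines shape (hypothetical values).
jsonl = """\
{"task": "example_task", "input": "Team: Yankees | Year: 2009", "options": ["Win", "Lose"], "output": "Win"}
{"task": "example_task", "input": "Team: Mets | Year: 2009", "options": ["Win", "Lose"], "output": "Lose"}"""
examples = [json.loads(line) for line in jsonl.splitlines()]
prompt = build_few_shot_prompt(examples)
print(prompt)
```

In practice one would hold out the final example's `output` as the prediction target and feed the preceding examples plus the final `input` to the model.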
MicPie/unpredictable_cluster00
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:16:43+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster00"}
2022-08-04T18:42:43+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-cluster00" - Dataset of Few-shot Tasks from Tables

## Table of Contents

- Dataset Description
  - Dataset Summary
  - Supported Tasks
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information

## Dataset Description

- Homepage: URL
- Repository: URL
- Paper: Few-shot Adaptation Works with UnpredicTable Data
- Point of Contact: junshern@URL, perez@URL

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.
* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.
* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * UnpredicTable-rated-low
  * UnpredicTable-rated-medium
  * UnpredicTable-rated-high
* UnpredicTable data subsets based on the website of origin:
  * UnpredicTable-baseball-fantasysports-yahoo-com
  * UnpredicTable-bulbapedia-bulbagarden-net
  * UnpredicTable-cappex-com
  * UnpredicTable-cram-com
  * UnpredicTable-dividend-com
  * UnpredicTable-dummies-com
  * UnpredicTable-en-wikipedia-org
  * UnpredicTable-ensembl-org
  * UnpredicTable-gamefaqs-com
  * UnpredicTable-mgoblog-com
  * UnpredicTable-mmo-champion-com
  * UnpredicTable-msdn-microsoft-com
  * UnpredicTable-phonearena-com
  * UnpredicTable-sittercity-com
  * UnpredicTable-sporcle-com
  * UnpredicTable-studystack-com
  * UnpredicTable-support-google-com
  * UnpredicTable-w3-org
  * UnpredicTable-wiki-openmoko-org
  * UnpredicTable-wkdu-org
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * UnpredicTable-cluster00
  * UnpredicTable-cluster01
  * UnpredicTable-cluster02
  * UnpredicTable-cluster03
  * UnpredicTable-cluster04
  * UnpredicTable-cluster05
  * UnpredicTable-cluster06
  * UnpredicTable-cluster07
  * UnpredicTable-cluster08
  * UnpredicTable-cluster09
  * UnpredicTable-cluster10
  * UnpredicTable-cluster11
  * UnpredicTable-cluster12
  * UnpredicTable-cluster13
  * UnpredicTable-cluster14
  * UnpredicTable-cluster15
  * UnpredicTable-cluster16
  * UnpredicTable-cluster17
  * UnpredicTable-cluster18
  * UnpredicTable-cluster19
  * UnpredicTable-cluster20
  * UnpredicTable-cluster21
  * UnpredicTable-cluster22
  * UnpredicTable-cluster23
  * UnpredicTable-cluster24
  * UnpredicTable-cluster25
  * UnpredicTable-cluster26
  * UnpredicTable-cluster27
  * UnpredicTable-cluster28
  * UnpredicTable-cluster29
  * UnpredicTable-cluster-noise

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000s of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
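The per-example structure described in the card (a jsonlines dict with 'task', 'input', 'options', and 'output' fields, where several examples are concatenated into a few-shot task) can be sketched as follows. This is a minimal, hypothetical illustration — the example rows, the `build_few_shot_prompt` helper, and the prompt formatting are assumptions for demonstration, not the authors' actual pipeline code:

```python
import json

# Hypothetical jsonlines content in the format described above: each line is
# one example with 'task', 'input', 'options', and 'output' fields.
raw_lines = [
    '{"task": "demo-task", "input": "Country: France | Continent:", "options": ["Europe", "Asia"], "output": "Europe"}',
    '{"task": "demo-task", "input": "Country: Japan | Continent:", "options": ["Europe", "Asia"], "output": "Asia"}',
    '{"task": "demo-task", "input": "Country: Spain | Continent:", "options": ["Europe", "Asia"], "output": "Europe"}',
]
examples = [json.loads(line) for line in raw_lines]


def build_few_shot_prompt(examples):
    """Concatenate all but the last example as demonstrations; the last
    example's 'input' becomes the query and its 'output' the expected answer."""
    demos, query = examples[:-1], examples[-1]
    parts = [f"{ex['input']} {ex['output']}" for ex in demos]
    parts.append(query["input"])
    return "\n".join(parts), query["output"]


prompt, target = build_few_shot_prompt(examples)
print(prompt)
print("target:", target)
```

For multiple-choice classification, a scorer would additionally restrict the model's prediction to the strings listed in the query's 'options' field.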
d701ce691e69f806172b7897590a746c37fe7f6b
# Dataset Card for "UnpredicTable-cluster01" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.

* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': URL of the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
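The per-example format described under Data Instances can be sketched in plain Python. The field names below match the Data Fields section, but the values (and the `[Col1] … [Col2] …` input serialization) are illustrative assumptions, not real rows from the dataset:

```python
import json

# Two made-up examples in the dataset's JSON Lines layout. Field names
# follow the Data Fields section; the values are hypothetical.
jsonl = (
    '{"task": "demo-task", "input": "[Col1] apple [Col2] red", '
    '"options": ["red", "green"], "output": "red", '
    '"pageTitle": "Fruit table", "outputColName": "Col2", '
    '"url": "https://example.com", "wdcFile": "demo.json.gz"}\n'
    '{"task": "demo-task", "input": "[Col1] pear [Col2] green", '
    '"options": ["red", "green"], "output": "green", '
    '"pageTitle": "Fruit table", "outputColName": "Col2", '
    '"url": "https://example.com", "wdcFile": "demo.json.gz"}\n'
)
examples = [json.loads(line) for line in jsonl.splitlines()]

# Concatenate all but the last example as in-context demonstrations,
# leaving the final output for the model to predict.
demos = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples[:-1])
prompt = f"{demos}\n{examples[-1]['input']} ->"
print(prompt)
```

This is only a sketch of how several examples from one task can be concatenated into a few-shot prompt; the exact prompt template used for fine-tuning is described in the publication.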
MicPie/unpredictable_cluster01
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:17:31+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster01"}
2022-08-04T18:43:16+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-cluster01" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cluster01\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster01\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster01\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
1cb3e58f98f2cf2fd87de4d22b18e05d089c8e6f
# Dataset Card for "UnpredicTable-cluster10" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
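As a minimal sketch, the concatenation described above can be done as follows (the task name, input/output values, and prompt separators here are invented for illustration; they are not prescribed by the dataset):

```python
# Hypothetical examples following the field layout described above;
# the concrete 'task', 'input', 'options', and 'output' values are invented.
examples = [
    {"task": "demo", "input": "[Name] Smith [Year] 2014", "options": ["yes", "no"], "output": "yes"},
    {"task": "demo", "input": "[Name] Jones [Year] 2015", "options": ["yes", "no"], "output": "no"},
]

def to_few_shot_prompt(examples, query):
    """Concatenate few-shot examples and a final query into one prompt string."""
    shots = [f"{ex['input']}\nOutput: {ex['output']}" for ex in examples]
    return "\n\n".join(shots + [f"{query}\nOutput:"])

prompt = to_few_shot_prompt(examples, "[Name] Brown [Year] 2016")
print(prompt)
```

The exact prompt template (separators, "Output:" marker, etc.) is a free choice left to the user of the dataset.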
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
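As a concrete illustration of the example format described under "Data Instances", the sketch below concatenates a few example dictionaries into a single few-shot prompt. This is a minimal, hypothetical illustration: the records and the prompt template are invented stand-ins that only reuse the field names from this card ('task', 'input', 'options', 'output'), not actual rows from the dataset.

```python
# Hypothetical sketch: build a few-shot prompt from UnpredicTable-style
# example dicts. Field names follow the card; the records are invented.
def build_fewshot_prompt(examples, query_input):
    """Concatenate (input, options, output) blocks, then append the query."""
    blocks = []
    for ex in examples:
        lines = [f"Input: {ex['input']}"]
        if ex.get("options"):  # 'options' is only populated for multiple choice
            lines.append("Options: " + " / ".join(ex["options"]))
        lines.append(f"Output: {ex['output']}")
        blocks.append("\n".join(lines))
    # The final block carries the query row and leaves the output to the model.
    blocks.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(blocks)


demo_task = [
    {"task": "demo", "input": "[COL] fruit [VAL] banana",
     "options": ["yellow", "red"], "output": "yellow"},
    {"task": "demo", "input": "[COL] fruit [VAL] cherry",
     "options": ["yellow", "red"], "output": "red"},
]
prompt = build_fewshot_prompt(demo_task, "[COL] fruit [VAL] lemon")
print(prompt)
```

In practice the example dictionaries would come from loading one of the subsets (e.g., via the Hugging Face `datasets` library), and the concatenated prompt would be fed to a language model for few-shot evaluation or fine-tuning.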
MicPie/unpredictable_cluster10
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:18:25+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster10"}
2022-08-04T18:49:37+00:00
[ "2208.01009" ]
[ "en" ]
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
af43492d1ea94b680c1f81f670b779d3e217a04d
# Dataset Card for "UnpredicTable-cluster11" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
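The few-shot task format described under Data Instances above can be sketched with a minimal example. The field names ('task', 'input', 'options', 'output') follow the card's Data Fields section, but the rows below are invented for illustration — actual UnpredicTable rows come from scraped web tables:

```python
import json

# Hypothetical rows in the card's JSON Lines task format; real rows are
# extracted from web tables and will look quite different.
jsonl = """\
{"task": "example-task", "input": "France", "options": ["Paris", "Rome", "Madrid"], "output": "Paris"}
{"task": "example-task", "input": "Italy", "options": ["Paris", "Rome", "Madrid"], "output": "Rome"}
{"task": "example-task", "input": "Spain", "options": ["Paris", "Rome", "Madrid"], "output": "Madrid"}
"""

examples = [json.loads(line) for line in jsonl.splitlines()]

def to_few_shot_prompt(demos, query):
    """Concatenate demonstration examples into a few-shot prompt,
    appending the query input with its output held out."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in demos]
    parts.append(f"Input: {query['input']}\nOutput:")
    return "\n\n".join(parts)

prompt = to_few_shot_prompt(examples[:2], examples[2])
print(prompt)
```

This is only a sketch of how the examples in one task "can be concatenated as a few-shot task," as the card puts it; the prompt template itself is a free choice of the user, not part of the dataset.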
MicPie/unpredictable_cluster11
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:19:16+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster11"}
2022-08-04T18:50:50+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster11" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
369a265f5c1a4a1542cd81ffe6ddc2e0b6f15233
# Dataset Card for "UnpredicTable-cluster12" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
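As a minimal sketch of that concatenation step: only the 'task'/'input'/'options'/'output' field names below follow the schema described above, while the two example rows and the prompt template are invented for illustration.

```python
import json

# Two hypothetical examples for one task, using the field names described
# above ('task', 'input', 'options', 'output'); the contents are invented.
jsonl_lines = [
    '{"task": "demo-task", "input": "[Team] Yankees", "options": [], "output": "New York"}',
    '{"task": "demo-task", "input": "[Team] Dodgers", "options": [], "output": "Los Angeles"}',
]

def build_few_shot_prompt(lines, query_input):
    """Concatenate a task's examples into one few-shot prompt for a new query."""
    shots = [json.loads(line) for line in lines]
    demos = "\n\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in shots)
    return f"{demos}\n\nInput: {query_input}\nOutput:"

print(build_few_shot_prompt(jsonl_lines, "[Team] Cubs"))
```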
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster12
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:20:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster12"}
2022-08-04T18:52:07+00:00
[ "2208.01009" ]
[ "en" ]
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster12\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
b33edbf919ceed2ed651b3be2dc6e4ad9c59d068
# Dataset Card for "UnpredicTable-cluster13" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster13
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:23:44+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster13"}
2022-08-04T18:52:42+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster13" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * UnpredicTable-rated-low
  * UnpredicTable-rated-medium
  * UnpredicTable-rated-high

* UnpredicTable data subsets based on the website of origin:
  * UnpredicTable-baseball-fantasysports-yahoo-com
  * UnpredicTable-bulbapedia-bulbagarden-net
  * UnpredicTable-cappex-com
  * UnpredicTable-cram-com
  * UnpredicTable-dividend-com
  * UnpredicTable-dummies-com
  * UnpredicTable-en-wikipedia-org
  * UnpredicTable-ensembl-org
  * UnpredicTable-gamefaqs-com
  * UnpredicTable-mgoblog-com
  * UnpredicTable-mmo-champion-com
  * UnpredicTable-msdn-microsoft-com
  * UnpredicTable-phonearena-com
  * UnpredicTable-sittercity-com
  * UnpredicTable-sporcle-com
  * UnpredicTable-studystack-com
  * UnpredicTable-support-google-com
  * UnpredicTable-w3-org
  * UnpredicTable-wiki-openmoko-org
  * UnpredicTable-wkdu-org

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * UnpredicTable-cluster00
  * UnpredicTable-cluster01
  * UnpredicTable-cluster02
  * UnpredicTable-cluster03
  * UnpredicTable-cluster04
  * UnpredicTable-cluster05
  * UnpredicTable-cluster06
  * UnpredicTable-cluster07
  * UnpredicTable-cluster08
  * UnpredicTable-cluster09
  * UnpredicTable-cluster10
  * UnpredicTable-cluster11
  * UnpredicTable-cluster12
  * UnpredicTable-cluster13
  * UnpredicTable-cluster14
  * UnpredicTable-cluster15
  * UnpredicTable-cluster16
  * UnpredicTable-cluster17
  * UnpredicTable-cluster18
  * UnpredicTable-cluster19
  * UnpredicTable-cluster20
  * UnpredicTable-cluster21
  * UnpredicTable-cluster22
  * UnpredicTable-cluster23
  * UnpredicTable-cluster24
  * UnpredicTable-cluster25
  * UnpredicTable-cluster26
  * UnpredicTable-cluster27
  * UnpredicTable-cluster28
  * UnpredicTable-cluster29
  * UnpredicTable-cluster-noise

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad.
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
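To make the format above concrete, here is a minimal sketch of one example and of how several examples from the same task can be concatenated into a few-shot prompt. The field names ('task', 'input', 'options', 'output', and the metadata fields) follow the description in this card; all concrete values, the bracketed input serialization, and the `to_few_shot_prompt` helper are invented for illustration.

```python
# Hypothetical example in the per-task format described above.
# Values are invented; only the field names come from the card.
example = {
    "task": "example-com_results_Result",      # hypothetical task identifier
    "input": "[Team] Arsenal [Year] 2004",     # column elements of one table row
    "options": ["Won", "Lost", "Drawn"],       # classes for multiple choice
    "output": "Won",                           # target column of the same row
    "pageTitle": "League results",
    "outputColName": "Result",
    "url": "http://example.com/results",
    "wdcFile": "0/example.json.gz",
}

def to_few_shot_prompt(examples):
    """Concatenate examples of one task into a few-shot prompt,
    withholding the last example's output for the model to predict."""
    shots = [f"{ex['input']}\n{ex['output']}" for ex in examples[:-1]]
    shots.append(examples[-1]["input"])  # query row, answer withheld
    return "\n\n".join(shots)

prompt = to_few_shot_prompt([example, example, example])
print(prompt)
```

The helper simply mirrors the card's statement that each task "contains several such examples which can be concatenated as a few-shot task"; the actual prompt template used for fine-tuning may differ.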
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cluster13\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster13\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster13\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
c1616a0c82c97e36cf0462181212b7c6d83dc299
# Dataset Card for "UnpredicTable-cluster14" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
    author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
    title = {Few-shot Adaptation Works with UnpredicTable Data},
    publisher = {arXiv},
    year = {2022},
    url = {https://arxiv.org/abs/2208.01009}
}
```
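The few-shot task format described under "Data Instances" can be sketched with a small helper that concatenates several examples of one task into a prompt. This is a minimal illustration only: the field names ('task', 'input', 'options', 'output') follow the schema above, but the concrete values and the prompt template are hypothetical, not the authors' actual fine-tuning format.

```python
# Sketch of the few-shot task format described under "Data Instances".
# Field names ('task', 'input', 'options', 'output') follow the card's
# schema; the example values and the prompt template are hypothetical.

def build_fewshot_prompt(examples, query_input):
    """Concatenate several examples of one task into a few-shot prompt."""
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}"
        if ex.get("options"):  # only present for multiple-choice tasks
            block += "\nOptions: " + " / ".join(ex["options"])
        block += f"\nOutput: {ex['output']}"
        parts.append(block)
    # The query row is appended without its target, to be completed by a model.
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    {
        "task": "example-task",             # task identifier
        "input": "Team: Eagles, Wins: 10",  # column elements of one table row
        "options": ["Win", "Loss"],         # candidate classes (multiple choice)
        "output": "Win",                    # target column of the same row
    },
    {
        "task": "example-task",
        "input": "Team: Hawks, Wins: 2",
        "options": ["Win", "Loss"],
        "output": "Loss",
    },
]

prompt = build_fewshot_prompt(examples, "Team: Bears, Wins: 8")
print(prompt)
```

In practice one would load the actual examples rather than construct them by hand, e.g. via the Hugging Face `datasets` library with `load_dataset("MicPie/unpredictable_cluster14")`, assuming a standard loader is available for this repository.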
MicPie/unpredictable_cluster14
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:29:56+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster14"}
2022-08-04T18:53:18+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-cluster14" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cluster14\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster14\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster14\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
7ca1cbebc1e2d2833b87e5cca0d749a9d20a96c6
# Dataset Card for "UnpredicTable-cluster15" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
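To make the task format ('task', 'input', 'options', 'output') and the tables-to-tasks conversion described above concrete, here is a minimal sketch. The table contents, the `table_to_task` helper, and the prompt layout are illustrative assumptions for demonstration, not the exact pipeline from the publication:

```python
# Hypothetical sketch: turn one web table into a few-shot task.
# Field names follow the dataset card; everything else is an assumption.

def table_to_task(rows, header, output_col, task_id):
    """Each row becomes one example: 'input' holds the non-target cells,
    'output' holds the target cell, 'options' lists the candidate classes."""
    out_idx = header.index(output_col)
    options = sorted({row[out_idx] for row in rows})  # distinct output values
    examples = []
    for row in rows:
        inp = " ".join(
            f"[{col}] {val}" for col, val in zip(header, row) if col != output_col
        )
        examples.append(
            {"task": task_id, "input": inp, "options": options, "output": row[out_idx]}
        )
    return examples


def to_fewshot_prompt(examples):
    """Concatenate examples into a few-shot prompt, leaving the last answer blank."""
    shots = [f"{ex['input']}\nAnswer: {ex['output']}" for ex in examples[:-1]]
    shots.append(f"{examples[-1]['input']}\nAnswer:")
    return "\n\n".join(shots)


header = ["Team", "City", "League"]
rows = [
    ["Yankees", "New York", "AL"],
    ["Mets", "New York", "NL"],
    ["Cubs", "Chicago", "NL"],
]
task = table_to_task(rows, header, output_col="League", task_id="example-0001")
print(to_fewshot_prompt(task))
```

In the actual dataset, each such task is stored as one jsonline file, with one example per line; the sketch only illustrates how column elements of a row map to 'input' and 'output'.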
MicPie/unpredictable_cluster15
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:31:11+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster15"}
2022-08-04T18:54:04+00:00
[ "2208.01009" ]
[ "en" ]
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
f5017ca57a997e3ad47c3f28cb4d93fbd4757162
# Dataset Card for "UnpredicTable-cluster16" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
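To make the record structure concrete, here is a minimal sketch of how such a task file could be parsed and its examples concatenated into a few-shot prompt. The field values and the `"input"` serialization below are invented for illustration only; consult the actual task files for the real format.

```python
import json

# Two invented records in the JSON Lines shape described above.
# (Real 'input' strings come from scraped table rows, so the exact
# serialization here is an assumption for demonstration purposes.)
jsonl = "\n".join([
    json.dumps({"task": "example.com_0", "input": "Name: Bulbasaur | Height: 0.7 m",
                "options": ["Grass", "Water", "Fire"], "output": "Grass"}),
    json.dumps({"task": "example.com_0", "input": "Name: Squirtle | Height: 0.5 m",
                "options": ["Grass", "Water", "Fire"], "output": "Water"}),
])

examples = [json.loads(line) for line in jsonl.splitlines()]

def few_shot_prompt(examples):
    """Concatenate all but the last example as demonstrations; the
    final 'input' is left unanswered for the model to complete."""
    demos = [f"{ex['input']} -> {ex['output']}" for ex in examples[:-1]]
    demos.append(f"{examples[-1]['input']} ->")
    return "\n".join(demos)

prompt = few_shot_prompt(examples)
target = examples[-1]["output"]  # held-out answer: "Water"
```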
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster16
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:32:41+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster16"}
2022-08-04T18:54:44+00:00
[ "2208.01009" ]
[ "en" ]
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster16\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster16\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
99ad12bd83136c04465691c15577aecc36e1d749
# Dataset Card for "UnpredicTable-cluster17" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-instances)
  - [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title  = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year   = {2022},
  url    = {https://arxiv.org/abs/2208.01009}
}
```
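The few-shot example format described under Data Instances can be sketched in a few lines of Python. This is a minimal illustration: the field names ('task', 'input', 'options', 'output') follow the card, but the sample values and the prompt template are invented for demonstration and are not taken from the dataset itself.

```python
# Hypothetical records in the card's per-example format: each dictionary is
# one row of a web table turned into an input/output pair for the same task.
examples = [
    {
        "task": "sample_task",
        "input": "[Name] Alice [Score] 91",
        "options": ["high", "low"],
        "output": "high",
    },
    {
        "task": "sample_task",
        "input": "[Name] Bob [Score] 34",
        "options": ["high", "low"],
        "output": "low",
    },
]

def to_few_shot_prompt(examples, query):
    """Concatenate labeled examples into a single few-shot prompt string."""
    shots = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples)
    return f"{shots}\n{query} ->"

prompt = to_few_shot_prompt(examples, "[Name] Carol [Score] 88")
print(prompt)
# [Name] Alice [Score] 91 -> high
# [Name] Bob [Score] 34 -> low
# [Name] Carol [Score] 88 ->
```

The "->" separator here is an arbitrary choice; any consistent template that joins 'input' and 'output' pairs yields a usable few-shot prompt for a language model.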
MicPie/unpredictable_cluster17
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:33:42+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster17"}
2022-08-04T18:55:23+00:00
[ "2208.01009" ]
[ "en" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster17\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
4f8c12a5c28317fceef20e232b0935db46653a70
# Dataset Card for "UnpredicTable-cluster18" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
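The few-shot task format described in the card above ('task', 'input', 'options', 'output' fields built from table rows) can be sketched in code. The following is a minimal, illustrative Python sketch — the toy table, the `[ColName] value` serialization, and the helper names are assumptions made for illustration, not the actual UnpredicTable conversion pipeline (the real procedure is detailed in the publication):

```python
# Illustrative sketch: turn one web table into a few-shot task by picking
# one column as the output and serializing the remaining columns of each
# row as the input, matching the field layout in the "Data Instances"
# section. The table contents below are made up.

def table_to_task(header, rows, output_col):
    """Turn one table into a few-shot task (a list of example dicts)."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        inp = " ".join(
            f"[{col}] {val}"
            for col, val in zip(header, row)
            if col != output_col
        )
        examples.append({
            "task": f"demo_{output_col}",          # task identifier
            "input": inp,                          # other columns of the row
            "options": sorted({r[out_idx] for r in rows}),  # classes seen in the column
            "output": row[out_idx],                # target column element
        })
    return examples

def to_prompt(examples):
    """Concatenate examples into a single few-shot prompt string."""
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    return "\n\n".join(blocks)

header = ["Player", "Team", "Position"]
rows = [
    ["Ann", "Reds", "Goalkeeper"],
    ["Ben", "Blues", "Striker"],
    ["Cal", "Reds", "Striker"],
]
task = table_to_task(header, rows, output_col="Position")
prompt = to_prompt(task[:2])  # a two-shot prompt for this toy task
```

Fine-tuning data is then built by concatenating such examples per task, as the 'Data Instances' section describes; for multiple-choice classification the 'options' field lists the classes a model must choose from.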
MicPie/unpredictable_cluster18
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T16:51:30+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster18"}
2022-08-04T18:55:58+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster18" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines (jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
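To make the example structure described above concrete, here is a minimal sketch of how a few such examples could be concatenated into a single few-shot prompt. The field names ('task', 'input', 'options', 'output') follow this card; the concrete values and the prompt template are invented for illustration and are not taken from the dataset itself.

```python
# Hypothetical examples following the schema described in this card
# ('task', 'input', 'options', 'output'); values are invented for illustration.
examples = [
    {"task": "demo", "input": "[Country] France [Capital]",
     "options": ["Paris", "Rome"], "output": "Paris"},
    {"task": "demo", "input": "[Country] Italy [Capital]",
     "options": ["Paris", "Rome"], "output": "Rome"},
]

def to_fewshot_prompt(examples, query):
    """Concatenate examples into one few-shot prompt, leaving the final
    output for the model to complete."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = to_fewshot_prompt(examples, "[Country] Spain [Capital]")
print(prompt)
```

For multiple-choice classification, the 'options' field would additionally be rendered into the prompt so the model knows the label set to choose from.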
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
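The named subsets in this card map onto dataset ids on the Hugging Face Hub. A minimal sketch of composing such an id, assuming the `MicPie/unpredictable_*` naming pattern used in the companion cards' links; downloading itself requires the `datasets` library and network access, so it is left commented out:

```python
# Compose the (assumed) Hugging Face Hub id for one of the subsets above.
subset = "cluster18"
hub_id = f"MicPie/unpredictable_{subset}"

# Actually fetching the data needs network access:
# from datasets import load_dataset
# ds = load_dataset(hub_id)
print(hub_id)
```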
# Dataset Card for "UnpredicTable-cluster19" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines (jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': URL of the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
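As a concrete illustration of the schema described under Data Instances, here is a minimal sketch that concatenates examples of one task into a few-shot prompt. The field names ('task', 'input', 'options', 'output') follow this card; the sample values and the `Input:`/`Output:` prompt template are purely illustrative assumptions, not the dataset's actual cell serialization.

```python
# Minimal sketch: turn UnpredicTable-style examples into one few-shot prompt.
# Field names follow the dataset card; the sample values below are made up.

def build_fewshot_prompt(examples, query_input):
    """Concatenate demonstration (input -> output) pairs, then the open query."""
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    # The final block leaves 'Output:' empty for the model to complete.
    blocks.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(blocks)

demos = [
    {"task": "example-task", "input": "row cell A", "options": [], "output": "label 1"},
    {"task": "example-task", "input": "row cell B", "options": [], "output": "label 2"},
]

prompt = build_fewshot_prompt(demos, "row cell C")
print(prompt)
```

In practice one would read each task's jsonl file line by line (e.g., with `json.loads`) and sample a handful of examples per task; for multiple-choice tasks, the 'options' list can additionally be rendered into each block.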
MicPie/unpredictable_cluster19
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:23:12+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster19"}
2022-08-04T18:56:35+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster19" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cluster19\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster19\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster19\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
a315107ed65af8cb8ab60533864ab7a6fc7a9ae2
# Dataset Card for "UnpredicTable-cluster02" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
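The instance format described under "Data Instances" above can be sketched as follows. This is an illustrative, minimal example: the field values, the table serialization in 'input', and the prompt-assembly helper are invented for demonstration and are not taken from the actual dataset or its publication.

```python
import json

# Two illustrative JSON Lines records shaped like the fields described
# above ('task', 'input', 'options', 'output' plus meta-data). The
# values themselves are made up for this sketch.
jsonl_records = [
    '{"task": "example_task", "input": "Team: Eagles | Games: 14", '
    '"options": [], "output": "12", "pageTitle": "League standings", '
    '"outputColName": "Wins", "url": "http://example.com/standings", "wdcFile": "00.json.gz"}',
    '{"task": "example_task", "input": "Team: Hawks | Games: 14", '
    '"options": [], "output": "9", "pageTitle": "League standings", '
    '"outputColName": "Wins", "url": "http://example.com/standings", "wdcFile": "00.json.gz"}',
]

examples = [json.loads(line) for line in jsonl_records]

def build_few_shot_prompt(examples):
    """Concatenate the examples of one task into a single few-shot prompt,
    holding out the last example's output as the prediction target."""
    demos = [f"{ex['input']}\n{ex['output']}" for ex in examples[:-1]]
    query = examples[-1]["input"]
    return "\n\n".join(demos + [query]), examples[-1]["output"]

prompt, target = build_few_shot_prompt(examples)
print(prompt)
print("target:", target)
```

For multiple-choice tasks, the 'options' list would be non-empty and could additionally be rendered into the prompt so the model chooses among the listed classes.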
MicPie/unpredictable_cluster02
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:25:06+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster02"}
2022-08-04T18:44:14+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-cluster02" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
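To make the instance format concrete, here is a minimal sketch of how one such few-shot example could be assembled into a prompt segment. The field names ('task', 'input', 'options', 'output', 'pageTitle', 'outputColName') come from the card above, but the field values and the `to_prompt` helper are invented for illustration only:

```python
# Illustrative only: the field names match the dataset card, but the
# values and the prompt-building helper below are hypothetical.
example = {
    "task": "example-task-id",
    "input": "[Col1] Paris [Col2] France",
    "options": ["Europe", "Asia", "Africa"],
    "output": "Europe",
    "pageTitle": "Capitals of the world",
    "outputColName": "Continent",
}

def to_prompt(ex):
    """Concatenate one example dict into a few-shot prompt segment."""
    prompt = f"Task: {ex['task']}\nInput: {ex['input']}\n"
    if ex.get("options"):  # only multiple-choice tasks carry options
        prompt += "Options: " + " | ".join(ex["options"]) + "\n"
    prompt += f"Output: {ex['output']}"
    return prompt

print(to_prompt(example))
```

Several such segments from the same task would be concatenated to form a complete few-shot training example.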
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-cluster02\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster02\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster02\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
7effaf03ef072593a70bbdf01af57bd13f67a922
# Dataset Card for "UnpredicTable-cluster20" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
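The 'task'/'input'/'options'/'output' example format described in the Data Instances section above can be sketched with a short helper. The function and the example rows below are invented for illustration and are not part of the dataset; real rows come from the dataset's jsonlines files, which can be loaded with `datasets.load_dataset("MicPie/unpredictable_cluster20")`.

```python
# Minimal sketch of how few-shot examples with the documented fields
# ('task', 'input', 'options', 'output') can be concatenated into a
# single few-shot prompt. All example rows here are invented.

def build_few_shot_prompt(examples, query_input):
    """Concatenate few-shot examples into a prompt ending with a query."""
    blocks = []
    for ex in examples:
        lines = [f"Input: {ex['input']}"]
        if ex.get("options"):  # only present for multiple-choice tasks
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Output: {ex['output']}")
        blocks.append("\n".join(lines))
    # Final query row: the 'Output:' is left for the model to complete.
    blocks.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    {"task": "demo", "input": "apple", "options": ["fruit", "animal"], "output": "fruit"},
    {"task": "demo", "input": "dog", "options": ["fruit", "animal"], "output": "animal"},
]
print(build_few_shot_prompt(examples, "banana"))
```

Prompts built this way concatenate several examples of one task, which matches the intended fine-tuning/pre-training use described above.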
MicPie/unpredictable_cluster20
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:26:06+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster20"}
2022-08-04T18:57:20+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster20" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
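The jsonlines layout described in the Data Instances section (one task per file, one example per line) can be parsed with the standard library alone. The sample lines below are invented for illustration; real task files carry the same documented fields.

```python
import io
import json

# Hypothetical jsonlines content for one task file (invented for
# illustration); each line is one few-shot example with the fields
# documented above: 'task', 'input', 'options', 'output'.
raw = io.StringIO(
    '{"task": "demo", "input": "apple", "options": ["fruit", "animal"], "output": "fruit"}\n'
    '{"task": "demo", "input": "dog", "options": ["fruit", "animal"], "output": "animal"}\n'
)

# Parse each non-empty line into an example dict.
examples = [json.loads(line) for line in raw if line.strip()]

# All examples in one file belong to the same task identifier.
assert all(ex["task"] == "demo" for ex in examples)
print(f"{len(examples)} examples loaded for task {examples[0]['task']!r}")
```

For real use, replacing the `io.StringIO` buffer with an open task file (or loading via the `datasets` library) yields the same per-example dictionaries.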
[ "# Dataset Card for \"UnpredicTable-cluster20\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster20\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster20\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
eaad3e8841939fa942e7bfbb4fa360a41e8e3308
# Dataset Card for "UnpredicTable-cluster21" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', and 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table

'options': for multiple-choice classification, the options to choose from

'output': target column element of the same row as the input

'pageTitle': the title of the page containing the table

'outputColName': output column name

'url': URL of the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus, any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
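Because the tasks are unfiltered, users may want to screen examples themselves before fine-tuning. The following is a minimal sketch of one possible precaution (a keyword blocklist over example text); the blocklist, field names, and records are illustrative only, this step is not part of the dataset's pipeline, and a real deployment would need far more thorough PII and toxicity filtering.

```python
# Illustrative screen: drop examples whose text matches a blocklist.
# NOT part of the dataset's pipeline and far from sufficient on its own;
# it only shows where such a filtering step would fit.
BLOCKLIST = {"ssn", "password"}  # illustrative terms only

def is_clean(example):
    """Return True if no blocklisted term appears in the example's text."""
    text = f"{example.get('input', '')} {example.get('output', '')}".lower()
    return not any(term in text for term in BLOCKLIST)

# Hypothetical examples in the task format described above.
examples = [
    {"input": "Name: Alice | SSN:", "output": "123-45-6789"},
    {"input": "[Country] France [Capital]", "output": "Paris"},
]
filtered = [ex for ex in examples if is_clean(ex)]
```

In practice one would combine such heuristics with dedicated PII-detection and toxicity-classification tooling.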
We have not run any analysis on the biases prevalent in our dataset, nor have we explicitly filtered the content. A model trained on our dataset may therefore reflect harmful biases and toxic text present in the data.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author    = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, J{\'e}r{\'e}my and Perez, Ethan},
  title     = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year      = {2022},
  url       = {https://arxiv.org/abs/2208.01009}
}
```
MicPie/unpredictable_cluster21
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:27:34+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster21"}
2022-08-04T18:57:54+00:00
[ "2208.01009" ]
[ "en" ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster21\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
005580775e33343198573fcbf0969a7075ee0538
# Dataset Card for "UnpredicTable-cluster22" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
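As a concrete illustration of the few-shot format described under Data Instances, the sketch below concatenates example dictionaries (with the 'task', 'input', 'options', and 'output' fields named in the card) into a single prompt string. The records and the prompt template here are made up for illustration and are not drawn from the dataset itself.

```python
# Illustrative records shaped like the card's 'Data Instances' description;
# the values are invented, and metadata fields (pageTitle, url, ...) are omitted.
examples = [
    {"task": "demo", "input": "Capital of France?", "options": ["Paris", "Berlin"], "output": "Paris"},
    {"task": "demo", "input": "Capital of Germany?", "options": ["Paris", "Berlin"], "output": "Berlin"},
]

def few_shot_prompt(records, query):
    """Concatenate the solved examples, then append the unanswered query."""
    blocks = []
    for r in records:
        opts = " / ".join(r["options"])
        blocks.append(f"Input: {r['input']}\nOptions: {opts}\nOutput: {r['output']}")
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(examples, "Capital of Italy?")
print(prompt)
```

Any prompt layout would do here; the point is only that the examples of one task are self-similar enough to be concatenated into a few-shot context.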
MicPie/unpredictable_cluster22
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:28:51+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster22"}
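The metadata record above is plain JSON; a minimal sketch of reading fields out of it (the string below is a trimmed copy of that record, not the full field):

```python
import json

# Trimmed copy of the metadata record above; the full record carries more keys.
metadata_json = (
    '{"annotations_creators": ["no-annotation"], "language": ["en"], '
    '"license": ["apache-2.0"], "pretty_name": "UnpredicTable-cluster22"}'
)

meta = json.loads(metadata_json)
print(meta["pretty_name"])  # UnpredicTable-cluster22
print(meta["license"][0])   # apache-2.0
```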
2022-08-04T18:58:29+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster22" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
# Dataset Card for "UnpredicTable-cluster23" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
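The concatenation of per-row examples into a few-shot prompt can be sketched as follows. Note that the records and the cell serialization used here are illustrative stand-ins built from the fields described above ('task', 'input', 'options', 'output'), not actual rows from the dataset:

```python
# Sketch: turning records with the fields described above into a single
# few-shot prompt. The records and the "[COL1] ..." cell serialization are
# illustrative stand-ins, not actual rows from UnpredicTable.
examples = [
    {"task": "demo-task", "input": "[COL1] Alice [COL2] 1984", "options": [], "output": "writer"},
    {"task": "demo-task", "input": "[COL1] Bob [COL2] 2001", "options": [], "output": "editor"},
    {"task": "demo-task", "input": "[COL1] Carol [COL2] 2010", "options": [], "output": "?"},
]

def to_fewshot_prompt(examples):
    """Render all but the last example as solved demonstrations; leave the
    final output blank for the model to complete."""
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples[:-1]]
    blocks.append(f"Input: {examples[-1]['input']}\nOutput:")
    return "\n\n".join(blocks)

prompt = to_fewshot_prompt(examples)
```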
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster23
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:29:41+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster23"}
2022-08-04T18:58:59+00:00
[ "2208.01009" ]
[ "en" ]
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster23\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
49a80015e70fd06f5f41cafb01df7cb5bf6b97b9
# Dataset Card for "UnpredicTable-cluster24" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
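The record layout described in the Data Instances and Data Fields sections above can be sketched with plain Python. Note this is an illustrative sketch only: the field names ('task', 'input', 'options', 'output', 'pageTitle') follow the card, but the concrete table values and the input serialization are invented for this example, not taken from the actual dataset.

```python
import json

# Invented records for illustration; field names follow the dataset card,
# but the values and the input serialization are made up for this sketch.
examples_jsonl = """\
{"task": "example-task", "input": "Team: Eagles | Wins", "options": ["10", "12"], "output": "12", "pageTitle": "Standings"}
{"task": "example-task", "input": "Team: Hawks | Wins", "options": ["8", "9"], "output": "9", "pageTitle": "Standings"}
"""

# Each line of a task file is one example dictionary.
examples = [json.loads(line) for line in examples_jsonl.splitlines()]

def build_few_shot_prompt(examples, query_input):
    """Concatenate solved examples into a single few-shot prompt,
    ending with an unsolved query, as described in the card."""
    blocks = [f"{ex['input']}\nAnswer: {ex['output']}" for ex in examples]
    blocks.append(f"{query_input}\nAnswer:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(examples, "Team: Owls | Wins")
print(prompt)
```

For the real data, this subset would typically be pulled with `datasets.load_dataset("MicPie/unpredictable_cluster24")` via the Hugging Face `datasets` library.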
MicPie/unpredictable_cluster24
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:33:36+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster24"}
2022-08-04T18:59:33+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster24" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': URL of the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
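The per-example schema described above can be sketched in plain Python. The two records below are invented for illustration (the real tasks are extracted from scraped web tables), but they follow the 'task'/'input'/'options'/'output' layout just described, and the prompt-building step shows how the examples of one task can be concatenated into a few-shot prompt:

```python
import json

# Hypothetical JSON Lines records following the schema described above
# ('task', 'input', 'options', 'output' plus metadata fields); all values
# here are invented for illustration.
jsonl = "\n".join(json.dumps(r) for r in [
    {"task": "example-task", "input": "Name: Widget A [SEP] Price: 9.99",
     "options": ["in stock", "sold out"], "output": "in stock",
     "pageTitle": "Store inventory", "outputColName": "Availability",
     "url": "https://example.com", "wdcFile": "file_00.json.gz"},
    {"task": "example-task", "input": "Name: Widget B [SEP] Price: 4.50",
     "options": ["in stock", "sold out"], "output": "sold out",
     "pageTitle": "Store inventory", "outputColName": "Availability",
     "url": "https://example.com", "wdcFile": "file_00.json.gz"},
])

# One task = one JSON Lines file; each line is one example.
examples = [json.loads(line) for line in jsonl.splitlines()]

def few_shot_prompt(examples):
    """Concatenate a task's examples into a single few-shot prompt,
    holding out the last example's output as the prediction target."""
    demos = [f"{ex['input']}\nAnswer: {ex['output']}" for ex in examples[:-1]]
    query = f"{examples[-1]['input']}\nAnswer:"
    return "\n\n".join(demos + [query]), examples[-1]["output"]

prompt, target = few_shot_prompt(examples)
```

The prompt/target split here is only one plausible way to consume the records; how examples are concatenated and delimited is up to the fine-tuning setup.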
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
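The tables-to-tasks conversion mentioned under "Initial Data Collection and Normalization" can be sketched as follows. This is a deliberately minimal sketch under assumed conventions: one table column is picked as the prediction target, the remaining columns are serialized as the input, and the set of observed target values becomes the options. The column names, rows, and `[SEP]` delimiter are invented for illustration; the real pipeline (see the publication) is more involved.

```python
def table_to_task(rows, output_col, task_id="example-task"):
    """Convert a web table (list of row dicts) into few-shot examples,
    using output_col as the target column. Illustrative sketch only."""
    options = sorted({row[output_col] for row in rows})
    examples = []
    for row in rows:
        # Serialize the non-target columns of this row as the input.
        input_str = " [SEP] ".join(
            f"{col}: {val}" for col, val in row.items() if col != output_col
        )
        examples.append({
            "task": task_id,
            "input": input_str,
            "options": options,
            "output": row[output_col],
        })
    return examples

# A toy table; the actual corpus tables come from 323,160 web domains.
table = [
    {"Player": "A. Smith", "Team": "Hawks", "Position": "Guard"},
    {"Player": "B. Jones", "Team": "Hawks", "Position": "Center"},
]
tasks = table_to_task(table, output_col="Position")
```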
[ "# Dataset Card for \"UnpredicTable-cluster24\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster24\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster24\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
a5209ce6aa2cb008f6c549d887e61bdce70cef27
# Dataset Card for "UnpredicTable-cluster25" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
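The per-example schema described under "Data Instances" (dictionaries with 'task', 'input', 'options', and 'output' fields, concatenated into a few-shot task) can be illustrated with a short sketch. The helper name and prompt template below are our own illustrative choices, not the exact format used in the paper:

```python
# Sketch: concatenate records with the fields described in this card
# ('task', 'input', 'options', 'output') into one few-shot prompt.
# The prompt template is an assumption for illustration only.

def build_few_shot_prompt(examples, query_input, options=None):
    """Join demonstration examples and a final query into a prompt string."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    query = f"Input: {query_input}\nOutput:"
    if options:  # multiple-choice tasks carry candidate classes in 'options'
        query = "Options: " + ", ".join(options) + "\n" + query
    parts.append(query)
    return "\n\n".join(parts)

# Hypothetical records shaped like the card's schema (not real dataset rows).
examples = [
    {"task": "demo", "input": "[Col1] 2004 [Col2] Shrek 2", "output": "animation"},
    {"task": "demo", "input": "[Col1] 1999 [Col2] The Matrix", "output": "sci-fi"},
]
prompt = build_few_shot_prompt(
    examples, "[Col1] 2010 [Col2] Inception", options=["animation", "sci-fi"]
)
print(prompt)
```

A model fine-tuned on such concatenations is then expected to complete the final "Output:" line; this is the few-shot usage pattern the card describes, with the exact serialization left to the publication.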
MicPie/unpredictable_cluster25
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:35:02+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster25"}
2022-08-04T19:00:11+00:00
[ "2208.01009" ]
[ "en" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster25\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster25\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
ab43fcc4730e40ea5247936a8e9e5d78eac55c90
# Dataset Card for "UnpredicTable-cluster26" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus, any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
MicPie/unpredictable_cluster26
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:38:15+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster26"}
2022-08-04T19:00:43+00:00
[ "2208.01009" ]
[ "en" ]
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
1b5312f1f3cf23ddbed7bc183e551ca0cc086400
# Dataset Card for "UnpredicTable-cluster27" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
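The tables-to-tasks procedure referenced in the card (several column elements of a row become the 'input', one target column becomes the 'output', and for classification the distinct target-column values become the 'options') can be sketched on a toy table. This is an illustrative reconstruction under those assumptions, not the authors' actual conversion pipeline; the table contents are invented.

```python
# Toy sketch of the tables-to-tasks idea: each table row yields one example
# whose 'input' serializes the non-target columns and whose 'output' is the
# value of a chosen target column. Illustrative only, not the real pipeline.

header = ["Player", "Team", "Position"]
rows = [
    ["Smith", "Reds", "SS"],
    ["Jones", "Cubs", "1B"],
]

def table_to_task(header, rows, output_col):
    """Convert a table into one few-shot task targeting `output_col`."""
    out_idx = header.index(output_col)
    # Distinct target values act as the multiple-choice 'options'.
    options = sorted({row[out_idx] for row in rows})
    task = []
    for row in rows:
        input_parts = [
            f"[{col}] {val}"
            for col, val in zip(header, row)
            if col != output_col
        ]
        task.append({
            "input": " ".join(input_parts),
            "options": options,
            "output": row[out_idx],
            "outputColName": output_col,
        })
    return task

task = table_to_task(header, rows, "Position")
for example in task:
    print(example)
```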
MicPie/unpredictable_cluster27
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:40:03+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster27"}
2022-08-04T19:01:16+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster27" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
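To make the record structure above concrete, here is a minimal sketch that parses a hypothetical task in the jsonline format described ('input', 'options', 'output', plus metadata such as 'outputColName') and concatenates its examples into a few-shot prompt. The field values are invented for illustration; real tasks are extracted from web tables.

```python
import json

# Hypothetical task in the jsonlines format described above: one example
# per line, each a dict with 'task', 'input', 'options', 'output' fields.
# The contents are invented for illustration.
RAW_JSONL = """\
{"task": "example_task", "input": "[player] Smith [team] Reds", "options": ["C", "1B", "SS"], "output": "SS", "outputColName": "Position"}
{"task": "example_task", "input": "[player] Jones [team] Cubs", "options": ["C", "1B", "SS"], "output": "1B", "outputColName": "Position"}
{"task": "example_task", "input": "[player] Diaz [team] Mets", "options": ["C", "1B", "SS"], "output": "C", "outputColName": "Position"}
"""

def build_few_shot_prompt(jsonl_text: str) -> str:
    """Use all but the last example as demonstrations, then leave the
    final example's output for the model to complete."""
    examples = [json.loads(line) for line in jsonl_text.splitlines()]
    demos, query = examples[:-1], examples[-1]
    lines = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in demos]
    lines.append(f"Input: {query['input']}\nOutput:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(RAW_JSONL)
print(prompt)
```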
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
# Dataset Card for "UnpredicTable-cluster28" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, each with only a few examples, whereas most current NLP datasets are very deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a 'task' field, which identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target that represents an individual column of the same row. Each task contains several such examples, which can be concatenated to form a few-shot task. 
In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', and 'wdcFile'. ### Data Fields * 'task': task identifier * 'input': column elements of a specific row in the table * 'options': for multiple-choice classification, the options to choose from * 'output': target column element of the same row as the input * 'pageTitle': the title of the page containing the table * 'outputColName': name of the output column * 'url': URL of the website containing the table * 'wdcFile': source file in the WDC Web Table Corpus ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
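The per-example structure described under Data Instances can be sketched in Python as follows. This is a minimal illustration only: the 'task', 'input', 'options', and 'output' field names follow the schema above, but the sample records and the prompt template are invented for demonstration, not drawn from the actual dataset files.

```python
# Sketch: concatenating UnpredicTable-style examples into a few-shot prompt.
# Field names follow the documented schema; record contents are invented.

def format_example(example, include_output=True):
    """Render a single example dictionary as one prompt segment."""
    lines = [f"Input: {example['input']}"]
    if example.get("options"):  # only present for multiple-choice tasks
        lines.append("Options: " + ", ".join(example["options"]))
    if include_output:
        lines.append(f"Output: {example['output']}")
    return "\n".join(lines)

def build_few_shot_prompt(examples, query):
    """Concatenate solved examples, then the unsolved query, as one prompt."""
    shots = [format_example(ex) for ex in examples]
    shots.append(format_example(query, include_output=False) + "\nOutput:")
    return "\n\n".join(shots)

# Invented sample records in the documented per-example format.
task_examples = [
    {"task": "demo", "input": "Country: France | Capital: ?", "options": [], "output": "Paris"},
    {"task": "demo", "input": "Country: Japan | Capital: ?", "options": [], "output": "Tokyo"},
]
query = {"task": "demo", "input": "Country: Italy | Capital: ?", "options": []}
print(build_few_shot_prompt(task_examples, query))
```

Each line of a task's JSON Lines file would correspond to one such example dictionary, so a whole task can be turned into a few-shot prompt in this way.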
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus, any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster28
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T17:58:32+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster28"}
2022-08-04T19:01:54+00:00
[ "2208.01009" ]
[ "en" ]
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster28\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
326bc07a2b864fc26f94b6c610a5348ad248ea87
# Dataset Card for "UnpredicTable-cluster29" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster29
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:06:50+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster29"}
2022-08-04T19:02:57+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster29" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * UnpredicTable-rated-low
  * UnpredicTable-rated-medium
  * UnpredicTable-rated-high
* UnpredicTable data subsets based on the website of origin:
  * UnpredicTable-baseball-fantasysports-yahoo-com
  * UnpredicTable-bulbapedia-bulbagarden-net
  * UnpredicTable-cappex-com
  * UnpredicTable-cram-com
  * UnpredicTable-dividend-com
  * UnpredicTable-dummies-com
  * UnpredicTable-en-wikipedia-org
  * UnpredicTable-ensembl-org
  * UnpredicTable-gamefaqs-com
  * UnpredicTable-mgoblog-com
  * UnpredicTable-mmo-champion-com
  * UnpredicTable-msdn-microsoft-com
  * UnpredicTable-phonearena-com
  * UnpredicTable-sittercity-com
  * UnpredicTable-sporcle-com
  * UnpredicTable-studystack-com
  * UnpredicTable-support-google-com
  * UnpredicTable-w3-org
  * UnpredicTable-wiki-openmoko-org
  * UnpredicTable-wkdu-org
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * UnpredicTable-cluster00
  * UnpredicTable-cluster01
  * UnpredicTable-cluster02
  * UnpredicTable-cluster03
  * UnpredicTable-cluster04
  * UnpredicTable-cluster05
  * UnpredicTable-cluster06
  * UnpredicTable-cluster07
  * UnpredicTable-cluster08
  * UnpredicTable-cluster09
  * UnpredicTable-cluster10
  * UnpredicTable-cluster11
  * UnpredicTable-cluster12
  * UnpredicTable-cluster13
  * UnpredicTable-cluster14
  * UnpredicTable-cluster15
  * UnpredicTable-cluster16
  * UnpredicTable-cluster17
  * UnpredicTable-cluster18
  * UnpredicTable-cluster19
  * UnpredicTable-cluster20
  * UnpredicTable-cluster21
  * UnpredicTable-cluster22
  * UnpredicTable-cluster23
  * UnpredicTable-cluster24
  * UnpredicTable-cluster25
  * UnpredicTable-cluster26
  * UnpredicTable-cluster27
  * UnpredicTable-cluster28
  * UnpredicTable-cluster29
  * UnpredicTable-cluster-noise

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad.
The shape of our dataset is very wide: we have thousands of tasks, while each task has only a few examples, in contrast to most current NLP datasets, which are very deep (tens of tasks with many examples each). This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional metadata fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

- 'task': task identifier
- 'input': column elements of a specific row in the table
- 'options': for multiple-choice classification, the options to choose from
- 'output': target column element of the same row as the input
- 'pageTitle': the title of the page containing the table
- 'outputColName': output column name
- 'url': URL of the website containing the table
- 'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
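The examples of a single task can be concatenated into a few-shot prompt as described above. A minimal sketch of that concatenation (the example rows and field values below are invented for illustration; only the field names 'task', 'input', 'options', and 'output' mirror the format described here):

```python
# Sketch of turning one task's examples into a few-shot prompt.
# The example rows are invented; only the field names match the dataset format.
def build_prompt(examples, query):
    """Concatenate solved few-shot examples, then append the unanswered query."""
    parts = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):  # only multiple-choice tasks carry options
            block += f"Options: {', '.join(ex['options'])}\n"
        block += f"Output: {ex['output']}"
        parts.append(block)
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    {"task": "demo", "input": "Team: Yankees | City: ?",
     "options": [], "output": "New York"},
    {"task": "demo", "input": "Team: Cubs | City: ?",
     "options": [], "output": "Chicago"},
]
prompt = build_prompt(examples, "Team: Dodgers | City: ?")
print(prompt)  # two solved examples followed by the open query
```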
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus, any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
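To make the tables-to-tasks conversion described under Source Data concrete, here is a toy version (the table, the choice of output column, and the serialization format are simplified assumptions; the actual conversion pipeline is described in our publication):

```python
# Toy version of the tables-to-tasks conversion: pick one column as the
# target ('outputColName') and serialize the remaining columns as 'input'.
# The table below is invented; the real pipeline also attaches metadata
# such as 'pageTitle', 'url', and 'wdcFile'.
def table_to_task(header, rows, output_col):
    """Turn one web table into a list of few-shot examples."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        inp = " | ".join(
            f"{h}: {v}" for h, v in zip(header, row) if h != output_col
        )
        examples.append(
            {"input": inp, "output": row[out_idx], "outputColName": output_col}
        )
    return examples

header = ["Player", "Team", "Position"]
rows = [["Jeter", "Yankees", "SS"], ["Banks", "Cubs", "SS"]]
task = table_to_task(header, rows, "Team")
print(task[0]["input"])   # Player: Jeter | Position: SS
print(task[0]["output"])  # Yankees
```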
[ "# Dataset Card for \"UnpredicTable-cluster29\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster29\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster29\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
253df5629a8ac1653c3b7b2fa5f6aec67a15d77b
# Dataset Card for "UnpredicTable-cluster03" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
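As a usage illustration of the schema described under Data Instances, the solved examples of one task can be concatenated into a few-shot prompt, with the final example's 'input' left for the model to complete. A minimal sketch with made-up placeholder records shaped like the documented 'task'/'input'/'options'/'output' fields (the field contents are illustrative, not actual rows from this dataset):

```python
# Made-up records shaped like the card's schema; real tasks come from the
# MicPie/unpredictable_cluster03 dataset itself.
examples = [
    {"task": "demo", "input": "[Country] France", "options": [], "output": "Paris"},
    {"task": "demo", "input": "[Country] Japan", "options": [], "output": "Tokyo"},
]
query_input = "[Country] Italy"

def build_few_shot_prompt(shots, query_input):
    """Concatenate solved examples, then the unsolved query input last."""
    parts = [f"{ex['input']}\n{ex['output']}" for ex in shots]
    parts.append(query_input)  # the model must complete this example's output
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(examples, query_input)
print(prompt)
```

The same concatenation applies to any number of shots, since every example in a task shares the same input/output structure.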
MicPie/unpredictable_cluster03
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:08:05+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster03"}
2022-08-04T18:44:47+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster03" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
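The tables-to-tasks conversion described under Source Data turns each table row into one example: one column is designated as the 'output' target, the remaining cells become the 'input', and the target column's distinct values can serve as multiple-choice 'options'. A hedged sketch of that idea (the demo table, serialization format, and helper name are assumptions for illustration, not the paper's actual pipeline):

```python
# Illustrative tables-to-tasks sketch: pick a target column, serialize the
# other cells of each row as input, collect target values as options.
def table_to_task(header, rows, output_col):
    target = header.index(output_col)
    options = sorted({row[target] for row in rows})
    examples = []
    for row in rows:
        cells = [f"[{header[i]}] {row[i]}" for i in range(len(header)) if i != target]
        examples.append({
            "task": f"demo-table-{output_col}",  # invented task identifier
            "input": " ".join(cells),
            "options": options,
            "output": row[target],
        })
    return examples

header = ["Player", "Team", "Position"]
rows = [
    ["Ann", "Reds", "Goalkeeper"],
    ["Bo", "Blues", "Striker"],
    ["Cy", "Reds", "Defender"],
]
tasks = table_to_task(header, rows, "Position")
```

One table thus yields as many examples as it has rows, which is why a few-thousand-row corpus of tables expands into hundreds of thousands of few-shot tasks.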
[ "# Dataset Card for \"UnpredicTable-cluster03\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster03\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster03\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
b745f72350bad6f06cefda65de7413e3b3d5245a
# Dataset Card for "UnpredicTable-cluster04" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster04
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:09:09+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster04"}
2022-08-04T18:45:22+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster04" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
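The tables-to-tasks conversion mentioned in the curation rationale (turning a relational web table into a few-shot task) can be illustrated with a minimal sketch: choose one column of the table as the target and serialize the remaining columns as the input. This is only an assumption-laden illustration of the general idea, not the authors' actual pipeline; see their publication for the real conversion details.

```python
# Minimal sketch of a tables-to-tasks conversion: one table column becomes
# the target ('output'), the rest are serialized as the 'input'.
# Illustration only -- not the authors' exact procedure.

def table_to_task(header, rows, output_col):
    """Turn one table into a list of few-shot examples targeting output_col."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_cells = [f"{h}: {v}"
                       for i, (h, v) in enumerate(zip(header, row))
                       if i != out_idx]
        examples.append({
            "input": " | ".join(input_cells),
            "output": row[out_idx],
            "outputColName": output_col,
        })
    return examples

header = ["Country", "Capital", "Continent"]
rows = [["France", "Paris", "Europe"], ["Japan", "Tokyo", "Asia"]]
task = table_to_task(header, rows, "Capital")
print(task[0]["input"])  # -> Country: France | Continent: Europe
```

In this framing, each choice of output column yields a separate task from the same table, which is one way a corpus of 50M tables can expand into hundreds of thousands of candidate few-shot tasks.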
[ "# Dataset Card for \"UnpredicTable-cluster04\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster04\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster04\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
41273c850a3db6dbbaa24e62f1781a29082933bc
# Dataset Card for "UnpredicTable-cluster05" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
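The 'task'/'input'/'options'/'output' fields described in the card lend themselves to straightforward few-shot prompt construction. A minimal sketch follows; the example records and the concatenation template are illustrative assumptions, not the exact format used in the paper:

```python
# Sketch: turn UnpredicTable-style examples into a few-shot prompt.
def build_fewshot_prompt(examples, query_input):
    """Concatenate input/output pairs from one task, then append the query input."""
    lines = []
    for ex in examples:
        if ex.get("options"):  # only present for multiple-choice tasks
            lines.append("Options: " + " | ".join(ex["options"]))
        lines.append("Input: " + ex["input"])
        lines.append("Output: " + ex["output"])
    lines.append("Input: " + query_input)
    lines.append("Output:")  # the model is asked to complete this line
    return "\n".join(lines)

# Invented records mirroring the documented fields ('task', 'input',
# 'options', 'output'); real examples come from the dataset's jsonline files.
examples = [
    {"task": "demo", "input": "[Team] Reds [Wins] 90", "options": [], "output": "90"},
    {"task": "demo", "input": "[Team] Cubs [Wins] 84", "options": [], "output": "84"},
]

prompt = build_fewshot_prompt(examples, "[Team] Mets [Wins] 77")
print(prompt)
```

The same helper works for classification tasks: passing a non-empty 'options' list simply prepends the candidate classes before each example.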
MicPie/unpredictable_cluster05
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:10:16+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster05"}
2022-08-04T18:45:58+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster05" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
dc851308a705e625ef0aa18db4e27271630bae0a
# Dataset Card for "UnpredicTable-cluster06" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
 * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
 * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
 * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
 * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
 * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
 * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
 * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
 * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
 * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
 * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
 * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
 * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
 * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
 * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
 * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
 * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
 * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
 * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
 * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
 * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
 * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
 * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
 * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
 * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
 * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
 * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
 * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
 * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
 * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
 * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
 * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
 * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
 * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
 * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
 * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
 * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
 * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
 * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
 * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
 * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
 * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
 * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000s of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples, which can be concatenated as a few-shot task.
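Such a concatenation can be sketched in a few lines of Python. The example dictionaries below are made up and only follow the 'task'/'input'/'options'/'output' schema described here; this is an illustration, not the exact prompt format used by the authors.

```python
# Sketch: turn UnpredicTable-style example dicts into one few-shot prompt.
# The rows are hypothetical; only the field names follow the schema above.

def build_few_shot_prompt(examples, query):
    """Concatenate solved examples, then append the unsolved query."""
    blocks = []
    for ex in examples:
        block = f"Input: {ex['input']}\n"
        if ex.get("options"):
            block += "Options: " + " | ".join(ex["options"]) + "\n"
        block += f"Output: {ex['output']}"
        blocks.append(block)
    # The final block leaves 'Output:' empty for the model to complete.
    blocks.append(f"Input: {query['input']}\nOutput:")
    return "\n\n".join(blocks)

examples = [
    {"task": "demo", "input": "[Team] Reds [Year] 1990",
     "options": ["Won", "Lost"], "output": "Won"},
    {"task": "demo", "input": "[Team] Blues [Year] 1991",
     "options": ["Won", "Lost"], "output": "Lost"},
]
query = {"input": "[Team] Greens [Year] 1992"}
print(build_few_shot_prompt(examples, query))
```

Any number of examples from the same task can be concatenated this way to control the number of shots.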
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': url to the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
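To make the tables-to-tasks conversion described under "Initial Data Collection and Normalization" more concrete, here is a toy sketch: one column of a web table is chosen as the output, and the remaining columns of each row are serialized as the input. The table below is invented, and this is a simplification for illustration, not the authors' actual conversion pipeline.

```python
# Toy illustration of the tables-to-tasks idea: pick one column of a
# table as the output and serialize the remaining columns of each row
# as the input. Not the authors' actual pipeline.

def table_to_task(header, rows, output_col):
    """Turn a table into a list of {'input', 'output'} examples."""
    out_idx = header.index(output_col)
    examples = []
    for row in rows:
        input_parts = [f"[{col}] {val}"
                       for col, val in zip(header, row)
                       if col != output_col]
        examples.append({"input": " ".join(input_parts),
                         "output": row[out_idx]})
    return examples

header = ["Player", "Team", "Position"]
rows = [["A. Smith", "Reds", "Goalkeeper"],
        ["B. Jones", "Blues", "Striker"]]
print(table_to_task(header, rows, "Position"))
```

Each row of the table yields one few-shot example, so a single table produces one task with as many examples as it has rows.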
MicPie/unpredictable_cluster06
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:11:07+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster06"}
2022-08-04T18:46:44+00:00
[ "2208.01009" ]
[ "en" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster06\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
9d24515103446c09480e8da07eba58407cf04628
# Dataset Card for "UnpredicTable-cluster07" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.

* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.

* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
  * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
  * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
  * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)

* UnpredicTable data subsets based on the website of origin:
  * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
  * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
  * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
  * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
  * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
  * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
  * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
  * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
  * [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
  * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
  * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
  * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
  * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
  * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
  * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
  * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
  * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
  * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
  * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
  * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)

* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
  * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
  * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
  * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
  * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
  * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
  * [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
  * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
  * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
  * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
  * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
  * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
  * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
  * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
  * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
  * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
  * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
  * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
  * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
  * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
  * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
  * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
  * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
  * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
  * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
  * [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
  * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
  * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
  * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
  * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
  * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
  * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)

### Supported Tasks and Leaderboards

Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000s of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task.
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

'task': task identifier

'input': column elements of a specific row in the table.

'options': for multiple choice classification, it provides the options to choose from.

'output': target column element of the same row as input.

'pageTitle': the title of the page containing the table.

'outputColName': output column name

'url': URL of the website containing the table

'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.

## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
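To make the jsonline few-shot format described in the card concrete, here is a minimal sketch of parsing JSON Lines task records and concatenating them into a single few-shot prompt. The field names ('task', 'input', 'options', 'output') come from this card; the two toy rows and their serialization are invented purely for illustration and are not real UnpredicTable examples.

```python
import json

# A toy two-line JSON Lines string in the schema described above; with the
# real data, each task's examples would be read line-by-line from a .jsonl file.
jsonl = "\n".join([
    json.dumps({"task": "demo", "input": "name: Pikachu | type: ?",
                "options": ["Electric", "Fire"], "output": "Electric"}),
    json.dumps({"task": "demo", "input": "name: Charmander | type: ?",
                "options": ["Electric", "Fire"], "output": "Fire"}),
])
examples = [json.loads(line) for line in jsonl.splitlines()]

def few_shot_prompt(support, query):
    """Concatenate support input/output pairs, then the held-out query input."""
    parts = [f"{ex['input']}\n{ex['output']}" for ex in support]
    parts.append(query["input"])
    return "\n\n".join(parts)

prompt = few_shot_prompt(examples[:-1], examples[-1])
print(prompt)
```

With the real data, the same per-task concatenation applies after downloading this subset, e.g. via the Hugging Face `datasets` library (`load_dataset("MicPie/unpredictable_cluster07")`).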
MicPie/unpredictable_cluster07
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:13:12+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster07"}
2022-08-04T18:47:24+00:00
[ "2208.01009" ]
[ "en" ]
# Dataset Card for "UnpredicTable-cluster07" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples. This contrasts with most current NLP datasets, which are very deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.

The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.

### Languages

English

## Dataset Structure

### Data Instances

Each task is represented as a jsonlines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.

There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.

### Data Fields

- 'task': task identifier
- 'input': column elements of a specific row in the table.
- 'options': for multiple choice classification, it provides the options to choose from.
- 'output': target column element of the same row as input.
- 'pageTitle': the title of the page containing the table.
- 'outputColName': output column name
- 'url': url to the website containing the table
- 'wdcFile': WDC Web Table Corpus file

### Data Splits

The UnpredicTable datasets do not come with additional data splits.
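As a rough illustration, the jsonlines format described above can be sketched in Python. The record below is hypothetical, using only the field names listed; its values are invented and not drawn from the dataset:

```python
import json

# A hypothetical jsonlines record, following the field names described above
# (the values are invented for illustration, not taken from the dataset).
line = json.dumps({
    "task": "example-task",
    "input": "[Player] J. Smith [Team] Example FC",
    "options": ["Goalkeeper", "Defender", "Forward"],
    "output": "Forward",
    "pageTitle": "Team roster",
    "outputColName": "Position",
    "url": "https://example.com/roster",
    "wdcFile": "00.json.gz",
})

example = json.loads(line)

# Several such examples from the same task can be concatenated
# into a few-shot prompt for a language model.
prompt = (
    f"{example['input']}\n"
    f"Options: {', '.join(example['options'])}\n"
    f"Answer: {example['output']}"
)
print(prompt)
```

In practice, a few-shot task is built by stacking several such prompts from the same table, with the final example's answer left blank for the model to complete.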
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Annotations

#### Annotation process

Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way.
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0
# Dataset Card for "UnpredicTable-cluster08" - Dataset of Few-shot Tasks from Tables

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** [email protected], [email protected]

### Dataset Summary

The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.

There are several dataset versions available:

* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
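The concatenation step can be sketched as follows. The example records below are invented for illustration; only the field names ('task', 'input', 'options', 'output') come from this card:

```python
# Minimal sketch of assembling a few-shot prompt from UnpredicTable-style
# examples. The records are invented; real tasks follow the same schema.
examples = [
    {"task": "demo", "input": "[Name] Alice [Age] 34", "options": [], "output": "34"},
    {"task": "demo", "input": "[Name] Bob [Age] 27", "options": [], "output": "27"},
    {"task": "demo", "input": "[Name] Carol [Age] 41", "options": [], "output": "41"},
]

def build_few_shot_prompt(shots, query_input):
    """Concatenate (input, output) pairs, ending with the unanswered query."""
    lines = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in shots]
    lines.append(f"Input: {query_input}\nOutput:")
    return "\n".join(lines)

# Use the first two examples as demonstrations and the third as the query.
prompt = build_few_shot_prompt(examples[:2], examples[2]["input"])
```

How the demonstrations are formatted and delimited is a free design choice; the dataset only provides the per-row input/output pairs.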
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
MicPie/unpredictable_cluster08
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:14:10+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster08"}
2022-08-04T18:48:00+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-cluster08\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 29, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster08\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
dadc5a03b684674e151c3007663e7f09ce6bf968
# Dataset Card for "UnpredicTable-cluster09" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
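As a rough illustration of the record layout described in the "Data Instances" section above, the following sketch builds a few-shot prompt from examples shaped like the ones in this dataset. All field values here are hypothetical placeholders invented for illustration — the `task`, `input`, and `output` strings are not drawn from the actual corpus:

```python
import json

# Hypothetical jsonline records following the documented layout: each example
# is a dict with 'task', 'input', 'options', 'output', plus metadata fields
# such as 'outputColName'. The values below are invented for illustration.
jsonl = """\
{"task": "demo-task", "input": "Team: Eagles. Season: 2014.", "options": ["10", "12"], "output": "12", "outputColName": "Wins"}
{"task": "demo-task", "input": "Team: Hawks. Season: 2014.", "options": ["7", "9"], "output": "9", "outputColName": "Wins"}
"""
records = [json.loads(line) for line in jsonl.splitlines()]

def build_few_shot_prompt(examples, query):
    """Concatenate solved examples, then append the query without its answer."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query['input']}\nOutput:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(records[:-1], records[-1])
print(prompt)
```

This mirrors the concatenation of examples into a few-shot task described above; in practice the records would come from the dataset's jsonline files rather than an inline string.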
MicPie/unpredictable_cluster09
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-08T18:15:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-cluster09"}
2022-08-04T18:48:52+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-cluster09" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
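As a concrete illustration of the example schema described above, the sketch below builds one example dictionary (every field value is invented for illustration, not drawn from the actual dataset) and shows how several examples of one task can be concatenated into a few-shot prompt:

```python
import json

# Hypothetical example following the schema above; all values are invented.
example = {
    "task": "hypothetical-task-id",
    "input": "[Player] Smith [Team] Eagles",
    "options": ["Forward", "Goalkeeper", "Defender"],
    "output": "Forward",
    "pageTitle": "Season roster",
    "outputColName": "Position",
    "url": "http://example.com/roster",
    "wdcFile": "hypothetical.json.gz",
}

# Each task is stored as a jsonlines file: one serialized example per line.
jsonl_line = json.dumps(example)

def few_shot_prompt(examples):
    """Concatenate several examples of one task into a few-shot prompt."""
    return "\n\n".join(
        f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples
    )

prompt = few_shot_prompt([example, example])
```

At inference time the final example's 'output' would typically be held out as the prediction target.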
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
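The tables-to-tasks procedure itself is only summarized in this card (the details are in the publication), but its core idea — treating each table column in turn as the prediction target, with the remaining columns flattened into the input — can be sketched as follows. This helper is a hypothetical illustration, not the authors' actual pipeline:

```python
def table_to_tasks(rows, task_prefix="hypothetical-table"):
    """Turn a table (a list of dicts sharing the same keys) into one
    candidate few-shot task per column: that column becomes the 'output'
    and the remaining columns are flattened into the 'input' string.
    Sketch of the tables-to-tasks idea only, not the real WTC pipeline."""
    columns = list(rows[0].keys())
    tasks = {}
    for out_col in columns:
        # Candidate answer options are the distinct values of the column.
        options = sorted({row[out_col] for row in rows})
        examples = []
        for row in rows:
            input_str = " ".join(
                f"[{c}] {row[c]}" for c in columns if c != out_col
            )
            examples.append({
                "task": f"{task_prefix}-{out_col}",
                "input": input_str,
                "options": options,
                "output": row[out_col],
                "outputColName": out_col,
            })
        tasks[out_col] = examples
    return tasks

# Toy table with invented values.
rows = [
    {"Team": "A", "Wins": "10", "Losses": "2"},
    {"Team": "B", "Wins": "7", "Losses": "5"},
]
tasks = table_to_tasks(rows)
```

The real pipeline additionally filters and scores the candidate tasks before including them in the dataset; see the publication for those steps.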
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-cluster09\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed instructions of the annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
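The task schema described above ('task', 'input', 'options', 'output', plus meta-data fields) can be illustrated with a small sketch. All concrete values below are invented for illustration (the real tasks are extracted from web tables), and the prompt-concatenation format is an assumption, not necessarily the one used in the paper.

```python
import json

# Two hypothetical examples of one few-shot task, following the field
# schema described above; every concrete value here is invented.
examples = [
    {"task": "example-task", "input": "Team: Eagles | Year: 2014",
     "options": ["won", "lost"], "output": "won",
     "pageTitle": "Season results", "outputColName": "Result",
     "url": "https://example.com/table", "wdcFile": "00-00.json.gz"},
    {"task": "example-task", "input": "Team: Hawks | Year: 2015",
     "options": ["won", "lost"], "output": "lost",
     "pageTitle": "Season results", "outputColName": "Result",
     "url": "https://example.com/table", "wdcFile": "00-00.json.gz"},
]

# Each task is stored as a jsonlines file: one example per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Concatenating the examples yields a few-shot prompt; this particular
# textual layout is an assumption for illustration only.
def to_prompt(examples, query):
    shots = "\n\n".join(
        f"Input: {ex['input']}\nOptions: {', '.join(ex['options'])}\nOutput: {ex['output']}"
        for ex in examples
    )
    return f"{shots}\n\nInput: {query}\nOptions: won, lost\nOutput:"

prompt = to_prompt(examples, "Team: Lions | Year: 2016")
```

The 'options' field is what makes multiple-choice classification possible: a model scores each option as a continuation of the prompt rather than generating free text.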
38f9e34cc1a66302e7dfd4e01dc228eafbf4dbc1
## Student Scores Dataset This dataset contains clean and original versions of Student Scores Dataset and the transformer used to transform it from original to clean, can be used for inferences. Here's the plot of the transformer: <style>#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 {color: black;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 pre{padding: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-toggleable {background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 
1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-estimator:hover {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-item {z-index: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-parallel-item:only-child::after {width: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 
0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 div.sk-text-repr-fallback {display: none;}</style><div id="sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="c04042d6-1013-4e6e-97d5-80229d8d887c" type="checkbox" ><label for="c04042d6-1013-4e6e-97d5-80229d8d887c" class="sk-toggleable__label sk-toggleable__label-arrow">ColumnTransformer</label><div 
class="sk-toggleable__content"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="09be6c7a-7620-4240-ae3e-fea9b9c4ba96" type="checkbox" ><label for="09be6c7a-7620-4240-ae3e-fea9b9c4ba96" class="sk-toggleable__label sk-toggleable__label-arrow">categorical_missing_value_imputer</label><div class="sk-toggleable__content"><pre>[0, 1, 2, 3, 4]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="26c15d8d-4a1f-4c4d-b0de-5385845dad87" type="checkbox" ><label for="26c15d8d-4a1f-4c4d-b0de-5385845dad87" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(fill_value=&#x27;missing&#x27;, strategy=&#x27;constant&#x27;)</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="240be745-c3b3-4b4a-825b-2d1fdb4098c4" type="checkbox" ><label for="240be745-c3b3-4b4a-825b-2d1fdb4098c4" class="sk-toggleable__label sk-toggleable__label-arrow">numerical_missing_value_imputer</label><div class="sk-toggleable__content"><pre>[5, 6, 7]</pre></div></div></div><div class="sk-serial"><div 
class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="27c7042f-3ced-4afc-ac3a-08b18ef36baa" type="checkbox" ><label for="27c7042f-3ced-4afc-ac3a-08b18ef36baa" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer(strategy=&#x27;median&#x27;)</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="78993eb3-7988-4fb6-b8e2-c05be3457d30" type="checkbox" ><label for="78993eb3-7988-4fb6-b8e2-c05be3457d30" class="sk-toggleable__label sk-toggleable__label-arrow">school_encoder</label><div class="sk-toggleable__content"><pre>[2]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="bc1fd86e-4a3b-4448-85d5-15961983cfa2" type="checkbox" ><label for="bc1fd86e-4a3b-4448-85d5-15961983cfa2" class="sk-toggleable__label sk-toggleable__label-arrow">OrdinalEncoder</label><div class="sk-toggleable__content"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="56bbc2fd-309f-40fc-b160-45fc33cea93b" type="checkbox" ><label for="56bbc2fd-309f-40fc-b160-45fc33cea93b" class="sk-toggleable__label sk-toggleable__label-arrow">status_encoder</label><div class="sk-toggleable__content"><pre>[4]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="b80005c6-2fe9-4168-971f-8951bfa7f8f3" type="checkbox" ><label for="b80005c6-2fe9-4168-971f-8951bfa7f8f3" class="sk-toggleable__label 
sk-toggleable__label-arrow">OrdinalEncoder</label><div class="sk-toggleable__content"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="677cf14a-996a-48af-ba0e-e3d2e83021b8" type="checkbox" ><label for="677cf14a-996a-48af-ba0e-e3d2e83021b8" class="sk-toggleable__label sk-toggleable__label-arrow">gender_encoder</label><div class="sk-toggleable__content"><pre>[0]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="0cad3051-c4b7-41a8-a372-c439ae4ad98b" type="checkbox" ><label for="0cad3051-c4b7-41a8-a372-c439ae4ad98b" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="e5707a95-9465-439b-ae0b-34e122add191" type="checkbox" ><label for="e5707a95-9465-439b-ae0b-34e122add191" class="sk-toggleable__label sk-toggleable__label-arrow">remainder</label><div class="sk-toggleable__content"><pre>[]</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="534f7a9b-d224-476c-993a-124b3435a8e3" type="checkbox" ><label for="534f7a9b-d224-476c-993a-124b3435a8e3" class="sk-toggleable__label sk-toggleable__label-arrow">passthrough</label><div class="sk-toggleable__content"><pre>passthrough</pre></div></div></div></div></div></div></div></div></div></div>
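The transformer rendered above can be reconstructed directly from its repr. The sketch below rebuilds it and fits it on a tiny invented table; the column layout (gender, name, school, grade, status, three score columns) is an assumption inferred from the column indices and transformer names, not documented in the card. Note that the same input column may feed several transformers (e.g. column 0 is both imputed and one-hot encoded), so its output appears more than once.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Rebuild the ColumnTransformer shown in the plot above.
transformer = ColumnTransformer(
    remainder="passthrough",
    transformers=[
        ("categorical_missing_value_imputer",
         SimpleImputer(fill_value="missing", strategy="constant"), [0, 1, 2, 3, 4]),
        ("numerical_missing_value_imputer",
         SimpleImputer(strategy="median"), [5, 6, 7]),
        ("school_encoder", OrdinalEncoder(), [2]),
        ("status_encoder", OrdinalEncoder(), [4]),
        ("gender_encoder", OneHotEncoder(), [0]),
    ],
)

# Invented sample rows matching the assumed column order:
# [gender, name, school, grade, status, score1, score2, score3]
X = np.array([
    ["F", "Alice", "North", "10", "active",   85.0, 90.0, 78.0],
    ["M", np.nan,  "South", "11", "inactive", 70.0, np.nan, 88.0],
    ["F", "Carol", "North", "10", "active",   92.0, 75.0, 80.0],
    ["M", "Dan",   "South", "12", "inactive", 60.0, 85.0, 95.0],
], dtype=object)

# 5 imputed categorical cols + 3 imputed numeric cols + 2 ordinal cols
# + 2 one-hot gender cols = 12 output columns; remainder is empty because
# every input column is consumed by some transformer.
Xt = transformer.fit_transform(X)
```

Because the transformers run on the *original* input columns (not on each other's output), the ordinal and one-hot encoders here see un-imputed values; this works on the sample data only because columns 0, 2 and 4 contain no missing entries.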
merve/student_scores
[ "region:us" ]
2022-07-08T23:02:42+00:00
{}
2022-07-08T23:02:48+00:00
[]
[]
TAGS #region-us
[ "## Student Scores Dataset\n\nThis dataset contains clean and original versions of Student Scores Dataset and the transformer used to transform it from original to clean, can be used for inferences.\n\nHere's the plot of the transformer:\n\n<style>#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 {color: black;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 pre{padding: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable {background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label-arrow:before {content: \"▸\";float: left;margin-right: 0.25em;color: #696969;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label-arrow:hover:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator:hover URL-toggleable__label-arrow:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__control:checked~URL-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__control:checked~URL-toggleable__label-arrow:before {content: \"▾\";}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator URL-toggleable__control:checked~URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label URL-toggleable__control:checked~URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: 
-1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator:hover {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item::after {content: \"\";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label:hover URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-serial::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-item {z-index: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:only-child::after {width: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: 
relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label-container {position: relative;z-index: 2;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-container {/* jupyter's 'URL' sets '[hidden] { display: none; }' but URL set '[hidden] { display: none !important; }' so we also need the '!important' here to be able to override the default hidden behavior on the sphinx rendered URL. See: URL */display: inline-block !important;position: relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-text-repr-fallback {display: none;}</style><div id=\"sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949\" class=\"sk-top-container\"><div class=\"sk-text-repr-fallback\"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class=\"sk-container\" hidden><div class=\"sk-item sk-dashed-wrapped\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"c04042d6-1013-4e6e-97d5-80229d8d887c\" type=\"checkbox\" ><label for=\"c04042d6-1013-4e6e-97d5-80229d8d887c\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">ColumnTransformer</label><div 
class=\"sk-toggleable__content\"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre></div></div></div><div class=\"sk-parallel\"><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"09be6c7a-7620-4240-ae3e-fea9b9c4ba96\" type=\"checkbox\" ><label for=\"09be6c7a-7620-4240-ae3e-fea9b9c4ba96\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">categorical_missing_value_imputer</label><div class=\"sk-toggleable__content\"><pre>[0, 1, 2, 3, 4]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"26c15d8d-4a1f-4c4d-b0de-5385845dad87\" type=\"checkbox\" ><label for=\"26c15d8d-4a1f-4c4d-b0de-5385845dad87\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">SimpleImputer</label><div class=\"sk-toggleable__content\"><pre>SimpleImputer(fill_value=&#x27;missing&#x27;, strategy=&#x27;constant&#x27;)</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"240be745-c3b3-4b4a-825b-2d1fdb4098c4\" type=\"checkbox\" ><label for=\"240be745-c3b3-4b4a-825b-2d1fdb4098c4\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">numerical_missing_value_imputer</label><div class=\"sk-toggleable__content\"><pre>[5, 6, 
7]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"27c7042f-3ced-4afc-ac3a-08b18ef36baa\" type=\"checkbox\" ><label for=\"27c7042f-3ced-4afc-ac3a-08b18ef36baa\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">SimpleImputer</label><div class=\"sk-toggleable__content\"><pre>SimpleImputer(strategy=&#x27;median&#x27;)</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"78993eb3-7988-4fb6-b8e2-c05be3457d30\" type=\"checkbox\" ><label for=\"78993eb3-7988-4fb6-b8e2-c05be3457d30\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">school_encoder</label><div class=\"sk-toggleable__content\"><pre>[2]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"bc1fd86e-4a3b-4448-85d5-15961983cfa2\" type=\"checkbox\" ><label for=\"bc1fd86e-4a3b-4448-85d5-15961983cfa2\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OrdinalEncoder</label><div class=\"sk-toggleable__content\"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"56bbc2fd-309f-40fc-b160-45fc33cea93b\" type=\"checkbox\" ><label for=\"56bbc2fd-309f-40fc-b160-45fc33cea93b\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">status_encoder</label><div class=\"sk-toggleable__content\"><pre>[4]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" 
id=\"b80005c6-2fe9-4168-971f-8951bfa7f8f3\" type=\"checkbox\" ><label for=\"b80005c6-2fe9-4168-971f-8951bfa7f8f3\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OrdinalEncoder</label><div class=\"sk-toggleable__content\"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"677cf14a-996a-48af-ba0e-e3d2e83021b8\" type=\"checkbox\" ><label for=\"677cf14a-996a-48af-ba0e-e3d2e83021b8\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">gender_encoder</label><div class=\"sk-toggleable__content\"><pre>[0]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"0cad3051-c4b7-41a8-a372-c439ae4ad98b\" type=\"checkbox\" ><label for=\"0cad3051-c4b7-41a8-a372-c439ae4ad98b\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OneHotEncoder</label><div class=\"sk-toggleable__content\"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"e5707a95-9465-439b-ae0b-34e122add191\" type=\"checkbox\" ><label for=\"e5707a95-9465-439b-ae0b-34e122add191\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">remainder</label><div class=\"sk-toggleable__content\"><pre>[]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"534f7a9b-d224-476c-993a-124b3435a8e3\" type=\"checkbox\" ><label for=\"534f7a9b-d224-476c-993a-124b3435a8e3\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">passthrough</label><div 
class=\"sk-toggleable__content\"><pre>passthrough</pre></div></div></div></div></div></div></div></div></div></div>" ]
[ "TAGS\n#region-us \n", "## Student Scores Dataset\n\nThis dataset contains clean and original versions of Student Scores Dataset and the transformer used to transform it from original to clean, can be used for inferences.\n\nHere's the plot of the transformer:\n\n<style>#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 {color: black;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 pre{padding: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable {background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label-arrow:before {content: \"▸\";float: left;margin-right: 0.25em;color: #696969;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__label-arrow:hover:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator:hover URL-toggleable__label-arrow:before {color: black;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__control:checked~URL-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-toggleable__control:checked~URL-toggleable__label-arrow:before {content: \"▾\";}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator URL-toggleable__control:checked~URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label URL-toggleable__control:checked~URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 
1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-estimator:hover {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item::after {content: \"\";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label:hover URL-toggleable__label {background-color: #d4ebff;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-serial::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-item {z-index: 1;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-parallel-item:only-child::after {width: 0;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: 
relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-label-container {position: relative;z-index: 2;text-align: center;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-container {/* jupyter's 'URL' sets '[hidden] { display: none; }' but URL set '[hidden] { display: none !important; }' so we also need the '!important' here to be able to override the default hidden behavior on the sphinx rendered URL. See: URL */display: inline-block !important;position: relative;}#sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949 URL-text-repr-fallback {display: none;}</style><div id=\"sk-46a90950-7a65-4bd5-81b7-b0c3bf7aa949\" class=\"sk-top-container\"><div class=\"sk-text-repr-fallback\"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class=\"sk-container\" hidden><div class=\"sk-item sk-dashed-wrapped\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"c04042d6-1013-4e6e-97d5-80229d8d887c\" type=\"checkbox\" ><label for=\"c04042d6-1013-4e6e-97d5-80229d8d887c\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">ColumnTransformer</label><div 
class=\"sk-toggleable__content\"><pre>ColumnTransformer(remainder=&#x27;passthrough&#x27;,transformers=[(&#x27;categorical_missing_value_imputer&#x27;,SimpleImputer(fill_value=&#x27;missing&#x27;,strategy=&#x27;constant&#x27;),[0, 1, 2, 3, 4]),(&#x27;numerical_missing_value_imputer&#x27;,SimpleImputer(strategy=&#x27;median&#x27;), [5, 6, 7]),(&#x27;school_encoder&#x27;, OrdinalEncoder(), [2]),(&#x27;status_encoder&#x27;, OrdinalEncoder(), [4]),(&#x27;gender_encoder&#x27;, OneHotEncoder(), [0])])</pre></div></div></div><div class=\"sk-parallel\"><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"09be6c7a-7620-4240-ae3e-fea9b9c4ba96\" type=\"checkbox\" ><label for=\"09be6c7a-7620-4240-ae3e-fea9b9c4ba96\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">categorical_missing_value_imputer</label><div class=\"sk-toggleable__content\"><pre>[0, 1, 2, 3, 4]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"26c15d8d-4a1f-4c4d-b0de-5385845dad87\" type=\"checkbox\" ><label for=\"26c15d8d-4a1f-4c4d-b0de-5385845dad87\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">SimpleImputer</label><div class=\"sk-toggleable__content\"><pre>SimpleImputer(fill_value=&#x27;missing&#x27;, strategy=&#x27;constant&#x27;)</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"240be745-c3b3-4b4a-825b-2d1fdb4098c4\" type=\"checkbox\" ><label for=\"240be745-c3b3-4b4a-825b-2d1fdb4098c4\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">numerical_missing_value_imputer</label><div class=\"sk-toggleable__content\"><pre>[5, 6, 
7]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"27c7042f-3ced-4afc-ac3a-08b18ef36baa\" type=\"checkbox\" ><label for=\"27c7042f-3ced-4afc-ac3a-08b18ef36baa\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">SimpleImputer</label><div class=\"sk-toggleable__content\"><pre>SimpleImputer(strategy=&#x27;median&#x27;)</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"78993eb3-7988-4fb6-b8e2-c05be3457d30\" type=\"checkbox\" ><label for=\"78993eb3-7988-4fb6-b8e2-c05be3457d30\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">school_encoder</label><div class=\"sk-toggleable__content\"><pre>[2]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"bc1fd86e-4a3b-4448-85d5-15961983cfa2\" type=\"checkbox\" ><label for=\"bc1fd86e-4a3b-4448-85d5-15961983cfa2\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OrdinalEncoder</label><div class=\"sk-toggleable__content\"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"56bbc2fd-309f-40fc-b160-45fc33cea93b\" type=\"checkbox\" ><label for=\"56bbc2fd-309f-40fc-b160-45fc33cea93b\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">status_encoder</label><div class=\"sk-toggleable__content\"><pre>[4]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" 
id=\"b80005c6-2fe9-4168-971f-8951bfa7f8f3\" type=\"checkbox\" ><label for=\"b80005c6-2fe9-4168-971f-8951bfa7f8f3\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OrdinalEncoder</label><div class=\"sk-toggleable__content\"><pre>OrdinalEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"677cf14a-996a-48af-ba0e-e3d2e83021b8\" type=\"checkbox\" ><label for=\"677cf14a-996a-48af-ba0e-e3d2e83021b8\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">gender_encoder</label><div class=\"sk-toggleable__content\"><pre>[0]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"0cad3051-c4b7-41a8-a372-c439ae4ad98b\" type=\"checkbox\" ><label for=\"0cad3051-c4b7-41a8-a372-c439ae4ad98b\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">OneHotEncoder</label><div class=\"sk-toggleable__content\"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class=\"sk-parallel-item\"><div class=\"sk-item\"><div class=\"sk-label-container\"><div class=\"sk-label sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"e5707a95-9465-439b-ae0b-34e122add191\" type=\"checkbox\" ><label for=\"e5707a95-9465-439b-ae0b-34e122add191\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">remainder</label><div class=\"sk-toggleable__content\"><pre>[]</pre></div></div></div><div class=\"sk-serial\"><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"534f7a9b-d224-476c-993a-124b3435a8e3\" type=\"checkbox\" ><label for=\"534f7a9b-d224-476c-993a-124b3435a8e3\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">passthrough</label><div 
class=\"sk-toggleable__content\"><pre>passthrough</pre></div></div></div></div></div></div></div></div></div></div>" ]
[ 6, 4951 ]
[ "passage: TAGS\n#region-us \n" ]
aa3d54a99f6314a888c3db3c67e6b27650913a9d
# Dataset Card for "WikiAnswers" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus) - **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) - **Paper:** [More Information Needed](https://doi.org/10.1145/2623330.2623677) - **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129) ### Dataset Summary The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer. 
### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2, ..., sentence_25]} {"set": [sentence_1, sentence_2, ..., sentence_25]} ... {"set": [sentence_1, sentence_2, ..., sentence_25]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/WikiAnswers") ``` The dataset is loaded as a `DatasetDict` and has the format for `N` examples: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: N }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) #### Who are the source language producers? [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Annotations #### Annotation process [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) #### Who are the annotators? 
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Personal and Sensitive Information [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Discussion of Biases [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Other Known Limitations [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Licensing Information [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus) ### Citation Information ``` @inproceedings{Fader14, author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni}, title = {{Open Question Answering Over Curated and Extracted Knowledge Bases}}, booktitle = {KDD}, year = {2014} } ``` ### Contributions
embedding-data/WikiAnswers
[ "task_categories:sentence-similarity", "task_ids:semantic-similarity-classification", "language:en", "license:mit", "region:us" ]
2022-07-08T23:13:25+00:00
{"language": ["en"], "license": "mit", "task_categories": ["sentence-similarity", "paraphrase-mining"], "task_ids": ["semantic-similarity-classification"], "paperswithcode_id": "embedding-data/WikiAnswers", "pretty_name": "WikiAnswers"}
2022-08-02T02:33:01+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us
# Dataset Card for "WikiAnswers" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: Anthony Fader, Luke Zettlemoyer, Oren Etzioni ### Dataset Summary The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer. ### Supported Tasks - Sentence Transformers training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the Datasets library with 'pip install datasets' and load the dataset from the Hub with: The dataset is loaded as a 'DatasetDict' and has the format for 'N' examples: Review an example 'i' with: ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for \"WikiAnswers\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Anthony Fader, Luke Zettlemoyer, Oren Etzioni", "### Dataset Summary\nThe WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. \nEach cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format for 'N' examples:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n", "# Dataset Card for \"WikiAnswers\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Anthony Fader, Luke Zettlemoyer, Oren Etzioni", "### Dataset Summary\nThe WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. \nEach cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.", "### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.", "### Languages\n- English.", "## Dataset Structure\nEach example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.", "### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format for 'N' examples:\n\nReview an example 'i' with:", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 43, 11, 120, 35, 81, 24, 7, 74, 65, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #task_ids-semantic-similarity-classification #language-English #license-mit #region-us \n# Dataset Card for \"WikiAnswers\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact: Anthony Fader, Luke Zettlemoyer, Oren Etzioni### Dataset Summary\nThe WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. \nEach cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.### Supported Tasks\n- Sentence Transformers training; useful for semantic search and sentence similarity.### Languages\n- English.## Dataset Structure\nEach example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key \"set\" and a list with the sentences as \"value\".\n\nThis dataset is useful for training Sentence Transformers models. 
Refer to the following post on how to train models using similar sentences.### Usage Example\nInstall the Datasets library with 'pip install datasets' and load the dataset from the Hub with:\n\nThe dataset is loaded as a 'DatasetDict' and has the format for 'N' examples:\n\nReview an example 'i' with:### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization" ]
54c7e700ad81e76204a401dabcb99d0995c24a47
Test dataset
changxin/test_pq
[ "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ch", "license:afl-3.0", "region:us" ]
2022-07-09T04:51:24+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ch"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["other-test"], "task_ids": ["other-test"], "paperswithcode_id": "ix", "pretty_name": "Test Dataset", "type": "test"}
2022-07-09T06:16:25+00:00
[]
[ "ch" ]
TAGS #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Chamorro #license-afl-3.0 #region-us
Test dataset
[]
[ "TAGS\n#annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Chamorro #license-afl-3.0 #region-us \n" ]
[ 67 ]
[ "passage: TAGS\n#annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Chamorro #license-afl-3.0 #region-us \n" ]
5be4ed72cb4b36286ea12103b29ba690fa5102b7
# Dataset Card for "UnpredicTable-rated-low" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
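The record layout described under Data Instances and Data Fields above can be sketched as follows. The field names ('task', 'input', 'options', 'output', plus the metadata fields) come from the card; the concrete record values and the prompt-formatting function are invented illustrations, not part of the released data:

```python
import json

# Hypothetical jsonline records in the layout described above: each example
# carries 'task', 'input', 'options', 'output', and table metadata fields.
jsonl = """\
{"task": "example-com_result", "input": "[TITLE] Game recap [SEP] Opponent: Ohio State", "options": ["win", "loss"], "output": "loss", "pageTitle": "Game recap", "outputColName": "Result", "url": "http://example.com", "wdcFile": "00.json.gz"}
{"task": "example-com_result", "input": "[TITLE] Game recap [SEP] Opponent: Rutgers", "options": ["win", "loss"], "output": "win", "pageTitle": "Game recap", "outputColName": "Result", "url": "http://example.com", "wdcFile": "00.json.gz"}"""

examples = [json.loads(line) for line in jsonl.splitlines()]

def to_few_shot_prompt(examples):
    """Concatenate the examples of one task into a single few-shot prompt."""
    blocks = []
    for ex in examples:
        options = " Options: " + ", ".join(ex["options"]) if ex.get("options") else ""
        blocks.append(f"Input: {ex['input']}{options}\nOutput: {ex['output']}")
    return "\n\n".join(blocks)

prompt = to_few_shot_prompt(examples)
print(prompt)
```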
MicPie/unpredictable_rated-low
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-09T07:47:52+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-rated-low"}
2022-08-04T19:12:07+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-rated-low" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-rated-low\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. 
UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * 
UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. 
We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n", "# Dataset Card for \"UnpredicTable-rated-low\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our 
tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Annotations", "#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.", "#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Dataset Curators\nJun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez", "### Licensing Information\nApache 2.0" ]
[ 324, 30, 112, 47, 866, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 5, 72, 20, 130, 8, 97, 112, 13, 5, 30, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us \n# Dataset Card for \"UnpredicTable-rated-low\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "passage: ## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data\n- Point of Contact: junshern@URL, perez@URL", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we 
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
7711c1ba72d06d6a47b4359d657abcd3b6ab2fdb
# Dataset Card for "UnpredicTable-rated-medium" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/).

### Annotations

#### Annotation process

Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.

#### Who are the annotators?

Annotations were carried out by a lab assistant.

### Personal and Sensitive Information

The data was extracted from the [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus, any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history; etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts.
We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0

### Citation Information

```
@misc{chan2022few,
  author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
  title = {Few-shot Adaptation Works with UnpredicTable Data},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2208.01009}
}
```
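The example layout described under Data Instances lends itself to a simple prompt-assembly step. The following is an illustrative sketch, not part of the dataset's tooling: the field names ('task', 'input', 'options', 'output') follow this card, but the prompt template and the toy rows are our own assumptions.

```python
def format_example(ex, include_output=True):
    # 'input' holds column elements of one table row; 'options' (if present)
    # lists the candidate classes; 'output' is the target column element.
    lines = [f"Input: {ex['input']}"]
    if ex.get("options"):
        lines.append("Options: " + " / ".join(ex["options"]))
    if include_output:
        lines.append(f"Output: {ex['output']}")
    return "\n".join(lines)

def build_few_shot_prompt(examples, query):
    # Concatenate several examples of the same task into one few-shot prompt,
    # leaving the final query's output for the model to predict.
    shots = [format_example(ex) for ex in examples]
    shots.append(format_example(query, include_output=False) + "\nOutput:")
    return "\n\n".join(shots)

# Toy rows mirroring the field layout; real rows come from the per-task
# JSON Lines files.
examples = [
    {"task": "demo", "input": "name: Paris | country: ?",
     "options": ["France", "Italy"], "output": "France"},
    {"task": "demo", "input": "name: Rome | country: ?",
     "options": ["France", "Italy"], "output": "Italy"},
]
query = {"task": "demo", "input": "name: Lyon | country: ?",
         "options": ["France", "Italy"]}
prompt = build_few_shot_prompt(examples, query)
print(prompt)
```

Each demonstration carries its own 'Output:' line, while the final query ends with a bare 'Output:' so the model's completion supplies the target column value.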
MicPie/unpredictable_rated-medium
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-09T07:53:46+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-rated-medium"}
2022-08-04T19:12:40+00:00
[ "2208.01009" ]
[ "en" ]
apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):\n * UnpredicTable-rated-low\n * UnpredicTable-rated-medium\n * UnpredicTable-rated-high\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-baseball-fantasysports-yahoo-com \n * UnpredicTable-bulbapedia-bulbagarden-net\n * UnpredicTable-cappex-com \n * UnpredicTable-cram-com\n * UnpredicTable-dividend-com \n * UnpredicTable-dummies-com\n * UnpredicTable-en-wikipedia-org\n * UnpredicTable-ensembl-org\n * UnpredicTable-gamefaqs-com\n * UnpredicTable-mgoblog-com\n * UnpredicTable-mmo-champion-com\n * UnpredicTable-msdn-microsoft-com\n * UnpredicTable-phonearena-com\n * UnpredicTable-sittercity-com\n * UnpredicTable-sporcle-com\n * UnpredicTable-studystack-com\n * UnpredicTable-support-google-com\n * UnpredicTable-w3-org\n * UnpredicTable-wiki-openmoko-org\n * UnpredicTable-wkdu-org\n\n\n* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):\n * UnpredicTable-cluster00\n * UnpredicTable-cluster01\n * UnpredicTable-cluster02\n * UnpredicTable-cluster03\n * UnpredicTable-cluster04\n * UnpredicTable-cluster05\n * UnpredicTable-cluster06\n * UnpredicTable-cluster07\n * UnpredicTable-cluster08\n * UnpredicTable-cluster09\n * UnpredicTable-cluster10\n * UnpredicTable-cluster11\n * UnpredicTable-cluster12\n * UnpredicTable-cluster13\n * UnpredicTable-cluster14\n * UnpredicTable-cluster15\n * UnpredicTable-cluster16\n * UnpredicTable-cluster17\n * 
UnpredicTable-cluster18\n * UnpredicTable-cluster19\n * UnpredicTable-cluster20\n * UnpredicTable-cluster21\n * UnpredicTable-cluster22\n * UnpredicTable-cluster23\n * UnpredicTable-cluster24\n * UnpredicTable-cluster25\n * UnpredicTable-cluster26\n * UnpredicTable-cluster27\n * UnpredicTable-cluster28\n * UnpredicTable-cluster29\n * UnpredicTable-cluster-noise### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. 
\n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Annotations#### Annotation process\n\nManual annotation was only carried out for the UnpredicTable-rated-low,\nUnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. 
Detailed annotation instructions can be found in our publication.#### Who are the annotators?\n\nAnnotations were carried out by a lab assistant." ]
d28f159164bbf1a19e0ecf09d9f2454c2e66a219
# Dataset Card for "UnpredicTable-rated-high" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ethanperez.net/unpredictable - **Repository:** https://github.com/JunShern/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data - **Point of Contact:** [email protected], [email protected] ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. 
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low) * [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium) * [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com) * [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net) * [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com) * [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com) * [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com) * [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com) * [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org) * [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org) * 
[UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com) * [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com) * [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com) * [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com) * [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com) * [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com) * [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com) * [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com) * [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com) * [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org) * [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org) * [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org) * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00) * [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01) * [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02) * [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03) * [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04) * 
[UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05) * [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06) * [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07) * [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08) * [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09) * [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10) * [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11) * [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12) * [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13) * [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14) * [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15) * [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16) * [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17) * [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18) * [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19) * [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20) * [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21) * [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22) * [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23) * 
[UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24) * [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25) * [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26) * [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27) * [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28) * [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29) * [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. 
In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? 
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Annotations #### Annotation process Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low), [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. 
We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Dataset Curators Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez ### Licensing Information Apache 2.0 ### Citation Information ``` @misc{chan2022few, author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan}, title = {Few-shot Adaptation Works with UnpredicTable Data}, publisher={arXiv}, year = {2022}, url = {https://arxiv.org/abs/2208.01009} } ```
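The few-shot task format described in the Data Instances and Data Fields sections above can be illustrated with a minimal, self-contained sketch. The record below is hypothetical and hand-written for illustration only: the field names ('task', 'input', 'options', 'output', plus metadata) follow the card's description, but the `[COL]`/`[VAL]` markers and all values are invented, not the dataset's actual serialization.

```python
import json

# Hypothetical JSON Lines content for one task, following the schema
# described in the card ('task', 'input', 'options', 'output' + metadata).
record_lines = """\
{"task": "example-task", "input": "[COL] Team [VAL] Red Sox [COL] Year [VAL] 2004", "options": ["Yes", "No"], "output": "Yes", "pageTitle": "Example page", "outputColName": "Playoffs", "url": "http://example.com", "wdcFile": "example.json.gz"}
{"task": "example-task", "input": "[COL] Team [VAL] Cubs [COL] Year [VAL] 2003", "options": ["Yes", "No"], "output": "No", "pageTitle": "Example page", "outputColName": "Playoffs", "url": "http://example.com", "wdcFile": "example.json.gz"}
"""

def build_few_shot_prompt(lines):
    """Concatenate the examples of one task into a single few-shot prompt.

    All examples but the last are rendered as input/output demonstrations;
    the last example's input is left as the query, and its gold output is
    returned as the target. For multiple-choice tasks, 'options' could
    additionally be appended to each input.
    """
    examples = [json.loads(l) for l in lines.splitlines() if l.strip()]
    parts = [f"{ex['input']}\n{ex['output']}" for ex in examples[:-1]]
    query = examples[-1]
    parts.append(query["input"])
    return "\n\n".join(parts), query["output"]

prompt, target = build_few_shot_prompt(record_lines)
print(prompt)
print("target:", target)
```

On the Hugging Face Hub, this subset can typically be loaded directly with `datasets.load_dataset("MicPie/unpredictable_rated-high")` rather than parsing JSON Lines by hand; the sketch above only shows how the per-example fields compose into a few-shot prompt.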
MicPie/unpredictable_rated-high
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "arxiv:2208.01009", "region:us" ]
2022-07-09T07:56:24+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-rated-high"}
2022-08-04T19:11:37+00:00
[ "2208.01009" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #arxiv-2208.01009 #region-us
# Dataset Card for "UnpredicTable-rated-high" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data - Point of Contact: junshern@URL, perez@URL ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. 
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings): * UnpredicTable-rated-low * UnpredicTable-rated-medium * UnpredicTable-rated-high * UnpredicTable data subsets based on the website of origin: * UnpredicTable-baseball-fantasysports-yahoo-com * UnpredicTable-bulbapedia-bulbagarden-net * UnpredicTable-cappex-com * UnpredicTable-cram-com * UnpredicTable-dividend-com * UnpredicTable-dummies-com * UnpredicTable-en-wikipedia-org * UnpredicTable-ensembl-org * UnpredicTable-gamefaqs-com * UnpredicTable-mgoblog-com * UnpredicTable-mmo-champion-com * UnpredicTable-msdn-microsoft-com * UnpredicTable-phonearena-com * UnpredicTable-sittercity-com * UnpredicTable-sporcle-com * UnpredicTable-studystack-com * UnpredicTable-support-google-com * UnpredicTable-w3-org * UnpredicTable-wiki-openmoko-org * UnpredicTable-wkdu-org * UnpredicTable data subsets based on clustering (for the clustering details please see our publication): * UnpredicTable-cluster00 * UnpredicTable-cluster01 * UnpredicTable-cluster02 * UnpredicTable-cluster03 * UnpredicTable-cluster04 * UnpredicTable-cluster05 * UnpredicTable-cluster06 * UnpredicTable-cluster07 * UnpredicTable-cluster08 * UnpredicTable-cluster09 * UnpredicTable-cluster10 * UnpredicTable-cluster11 * UnpredicTable-cluster12 * UnpredicTable-cluster13 * UnpredicTable-cluster14 * UnpredicTable-cluster15 * UnpredicTable-cluster16 * UnpredicTable-cluster17 * UnpredicTable-cluster18 * UnpredicTable-cluster19 * UnpredicTable-cluster20 * UnpredicTable-cluster21 * UnpredicTable-cluster22 * UnpredicTable-cluster23 * UnpredicTable-cluster24 * UnpredicTable-cluster25 * UnpredicTable-cluster26 * UnpredicTable-cluster27 * UnpredicTable-cluster28 * UnpredicTable-cluster29 * UnpredicTable-cluster-noise ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. 
The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Annotations #### Annotation process Manual annotation was only carried out for the UnpredicTable-rated-low, UnpredicTable-rated-medium, and UnpredicTable-rated-high data subsets to rate task quality. Detailed annotation instructions can be found in our publication. #### Who are the annotators? Annotations were carried out by a lab assistant. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. 
Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Dataset Curators

Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez

### Licensing Information

Apache 2.0