sha
stringlengths
40
40
text
stringlengths
1
13.4M
id
stringlengths
2
117
tags
sequencelengths
1
7.91k
created_at
stringlengths
25
25
metadata
stringlengths
2
875k
last_modified
stringlengths
25
25
arxiv
sequencelengths
0
25
languages
sequencelengths
0
7.91k
tags_str
stringlengths
17
159k
text_str
stringlengths
1
447k
text_lists
sequencelengths
0
352
processed_texts
sequencelengths
1
353
tokens_length
sequencelengths
1
353
input_texts
sequencelengths
1
40
9660f10339cfc46a369a4d95fccb301a739c3fa8
# Dataset Card for Flores200 ## Table of Contents - [Dataset Card for Flores200](#dataset-card-for-flores200) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Home:** [Flores](https://github.com/facebookresearch/flores) - **Repository:** [Github](https://github.com/facebookresearch/flores) ### Dataset Summary FLORES is a benchmark dataset for machine translation between English and low-resource languages. >The creation of FLORES200 doubles the existing language coverage of FLORES-101. Given the nature of the new languages, which have less standardization and require more specialized professional translations, the verification process became more complex. This required modifications to the translation workflow. FLORES-200 has several languages which were not translated from English. Specifically, several languages were translated from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also includes two script alternatives for four languages. FLORES-200 consists of translations from 842 distinct web articles, totaling 3001 sentences. These sentences are divided into three splits: dev, devtest, and test (hidden). On average, sentences are approximately 21 words long. **Disclaimer**: *The Flores200 dataset is hosted by the Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). Flores 200 is an extention of this. ### Languages The dataset contains parallel sentences for 200 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) plus an additional code describing the script (e.g., "eng_Latn", "ukr_Cyrl"). See [the webpage for code descriptions](https://github.com/facebookresearch/flores/blob/main/flores200/README.md). Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command. Use a hyphenated pairing to get two langauges in one datapoint (e.g., "eng_Latn-ukr_Cyrl" will provide sentences in the format below). ## Dataset Structure ### Data Instances A sample from the `dev` split for the Ukrainian language (`ukr_Cyrl` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits. ```python { 'id': 1, 'sentence': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.', 'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet', 'domain': 'wikinews', 'topic': 'health', 'has_image': 0, 'has_hyperlink': 0 } ``` When using a hyphenated pairing or using the `all` function, data will be presented as follows: ```python { 'id': 1, 'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet', 'domain': 'wikinews', 'topic': 'health', 'has_image': 0, 'has_hyperlink': 0, 'sentence_eng_Latn': 'On Monday, scientists from the Stanford University School of Medicine announced the invention of a new diagnostic tool that can sort cells by type: a tiny printable chip that can be manufactured using standard inkjet printers for possibly about one U.S. cent each.', 'sentence_ukr_Cyrl': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.' } ``` The text is provided as-in the original dataset, without further preprocessing or tokenization. ### Data Fields - `id`: Row number for the data entry, starting at 1. - `sentence`: The full sentence in the specific language (may have _lang for pairings) - `URL`: The URL for the English article from which the sentence was extracted. - `domain`: The domain of the sentence. - `topic`: The topic of the sentence. - `has_image`: Whether the original article contains an image. - `has_hyperlink`: Whether the sentence contains a hyperlink. ### Data Splits | config| `dev`| `devtest`| |-----------------:|-----:|---------:| |all configurations| 997| 1012:| ### Dataset Creation Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation. ## Additional Information ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @article{nllb2022, author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} } ```
Muennighoff/flores200
[ "task_categories:text2text-generation", "task_categories:translation", "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|flores", "language:ace", "language:acm", "language:acq", "language:aeb", "language:afr", "language:ajp", "language:aka", "language:als", "language:amh", "language:apc", "language:arb", "language:ars", "language:ary", "language:arz", "language:asm", "language:ast", "language:awa", "language:ayr", "language:azb", "language:azj", "language:bak", "language:bam", "language:ban", "language:bel", "language:bem", "language:ben", "language:bho", "language:bjn", "language:bod", "language:bos", "language:bug", "language:bul", "language:cat", "language:ceb", "language:ces", "language:cjk", "language:ckb", "language:crh", "language:cym", "language:dan", "language:deu", "language:dik", "language:dyu", "language:dzo", "language:ell", "language:eng", "language:epo", "language:est", "language:eus", "language:ewe", "language:fao", "language:fij", "language:fin", "language:fon", "language:fra", "language:fur", "language:fuv", "language:gaz", "language:gla", "language:gle", "language:glg", "language:grn", "language:guj", "language:hat", "language:hau", "language:heb", "language:hin", "language:hne", "language:hrv", "language:hun", "language:hye", "language:ibo", "language:ilo", "language:ind", "language:isl", "language:ita", "language:jav", "language:jpn", "language:kab", "language:kac", "language:kam", "language:kan", "language:kas", "language:kat", "language:kaz", "language:kbp", "language:kea", "language:khk", "language:khm", "language:kik", "language:kin", "language:kir", "language:kmb", "language:kmr", "language:knc", "language:kon", "language:kor", "language:lao", "language:lij", "language:lim", "language:lin", "language:lit", "language:lmo", "language:ltg", "language:ltz", "language:lua", "language:lug", "language:luo", "language:lus", "language:lvs", "language:mag", "language:mai", "language:mal", "language:mar", "language:min", "language:mkd", "language:mlt", "language:mni", "language:mos", "language:mri", "language:mya", "language:nld", "language:nno", "language:nob", "language:npi", "language:nso", "language:nus", "language:nya", "language:oci", "language:ory", "language:pag", "language:pan", "language:pap", "language:pbt", "language:pes", "language:plt", "language:pol", "language:por", "language:prs", "language:quy", "language:ron", "language:run", "language:rus", "language:sag", "language:san", "language:sat", "language:scn", "language:shn", "language:sin", "language:slk", "language:slv", "language:smo", "language:sna", "language:snd", "language:som", "language:sot", "language:spa", "language:srd", "language:srp", "language:ssw", "language:sun", "language:swe", "language:swh", "language:szl", "language:tam", "language:taq", "language:tat", "language:tel", "language:tgk", "language:tgl", "language:tha", "language:tir", "language:tpi", "language:tsn", "language:tso", "language:tuk", "language:tum", "language:tur", "language:twi", "language:tzm", "language:uig", "language:ukr", "language:umb", "language:urd", "language:uzn", "language:vec", "language:vie", "language:war", "language:wol", "language:xho", "language:ydd", "language:yor", "language:yue", "language:zho", "language:zsm", "language:zul", "license:cc-by-sa-4.0", "conditional-text-generation", "arxiv:2207.04672", "region:us" ]
2022-07-17T07:11:54+00:00
{"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["ace", "acm", "acq", "aeb", "afr", "ajp", "aka", "als", "amh", "apc", "arb", "ars", "ary", "arz", "asm", "ast", "awa", "ayr", "azb", "azj", "bak", "bam", "ban", "bel", "bem", "ben", "bho", "bjn", "bod", "bos", "bug", "bul", "cat", "ceb", "ces", "cjk", "ckb", "crh", "cym", "dan", "deu", "dik", "dyu", "dzo", "ell", "eng", "epo", "est", "eus", "ewe", "fao", "fij", "fin", "fon", "fra", "fur", "fuv", "gaz", "gla", "gle", "glg", "grn", "guj", "hat", "hau", "heb", "hin", "hne", "hrv", "hun", "hye", "ibo", "ilo", "ind", "isl", "ita", "jav", "jpn", "kab", "kac", "kam", "kan", "kas", "kat", "kaz", "kbp", "kea", "khk", "khm", "kik", "kin", "kir", "kmb", "kmr", "knc", "kon", "kor", "lao", "lij", "lim", "lin", "lit", "lmo", "ltg", "ltz", "lua", "lug", "luo", "lus", "lvs", "mag", "mai", "mal", "mar", "min", "mkd", "mlt", "mni", "mos", "mri", "mya", "nld", "nno", "nob", "npi", "nso", "nus", "nya", "oci", "ory", "pag", "pan", "pap", "pbt", "pes", "plt", "pol", "por", "prs", "quy", "ron", "run", "rus", "sag", "san", "sat", "scn", "shn", "sin", "slk", "slv", "smo", "sna", "snd", "som", "sot", "spa", "srd", "srp", "ssw", "sun", "swe", "swh", "szl", "tam", "taq", "tat", "tel", "tgk", "tgl", "tha", "tir", "tpi", "tsn", "tso", "tuk", "tum", "tur", "twi", "tzm", "uig", "ukr", "umb", "urd", "uzn", "vec", "vie", "war", "wol", "xho", "ydd", "yor", "yue", "zho", "zsm", "zul"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|flores"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "paperswithcode_id": "flores", "pretty_name": "flores200", "tags": ["conditional-text-generation"]}
2024-01-07T18:12:19+00:00
[ "2207.04672" ]
[ "ace", "acm", "acq", "aeb", "afr", "ajp", "aka", "als", "amh", "apc", "arb", "ars", "ary", "arz", "asm", "ast", "awa", "ayr", "azb", "azj", "bak", "bam", "ban", "bel", "bem", "ben", "bho", "bjn", "bod", "bos", "bug", "bul", "cat", "ceb", "ces", "cjk", "ckb", "crh", "cym", "dan", "deu", "dik", "dyu", "dzo", "ell", "eng", "epo", "est", "eus", "ewe", "fao", "fij", "fin", "fon", "fra", "fur", "fuv", "gaz", "gla", "gle", "glg", "grn", "guj", "hat", "hau", "heb", "hin", "hne", "hrv", "hun", "hye", "ibo", "ilo", "ind", "isl", "ita", "jav", "jpn", "kab", "kac", "kam", "kan", "kas", "kat", "kaz", "kbp", "kea", "khk", "khm", "kik", "kin", "kir", "kmb", "kmr", "knc", "kon", "kor", "lao", "lij", "lim", "lin", "lit", "lmo", "ltg", "ltz", "lua", "lug", "luo", "lus", "lvs", "mag", "mai", "mal", "mar", "min", "mkd", "mlt", "mni", "mos", "mri", "mya", "nld", "nno", "nob", "npi", "nso", "nus", "nya", "oci", "ory", "pag", "pan", "pap", "pbt", "pes", "plt", "pol", "por", "prs", "quy", "ron", "run", "rus", "sag", "san", "sat", "scn", "shn", "sin", "slk", "slv", "smo", "sna", "snd", "som", "sot", "spa", "srd", "srp", "ssw", "sun", "swe", "swh", "szl", "tam", "taq", "tat", "tel", "tgk", "tgl", "tha", "tir", "tpi", "tsn", "tso", "tuk", "tum", "tur", "twi", "tzm", "uig", "ukr", "umb", "urd", "uzn", "vec", "vie", "war", "wol", "xho", "ydd", "yor", "yue", "zho", "zsm", "zul" ]
TAGS #task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-Achinese #language-Mesopotamian Arabic #language-Ta'izzi-Adeni Arabic #language-Tunisian Arabic #language-Afrikaans #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Amharic #language-Levantine Arabic #language-Standard Arabic #language-Najdi Arabic #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Awadhi #language-Central Aymara #language-South Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-Bhojpuri #language-Banjar #language-Tibetan #language-Bosnian #language-Buginese #language-Bulgarian #language-Catalan #language-Cebuano #language-Czech #language-Chokwe #language-Central Kurdish #language-Crimean Tatar #language-Welsh #language-Danish #language-German #language-Southwestern Dinka #language-Dyula #language-Dzongkha #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Faroese #language-Fijian #language-Finnish #language-Fon #language-French #language-Friulian #language-Nigerian Fulfulde #language-West Central Oromo #language-Scottish Gaelic #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Chhattisgarhi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Iloko #language-Indonesian #language-Icelandic #language-Italian #language-Javanese #language-Japanese #language-Kabyle #language-Kachin #language-Kamba (Kenya) #language-Kannada #language-Kashmiri #language-Georgian #language-Kazakh #language-Kabiyè #language-Kabuverdianu #language-Halh Mongolian #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Kimbundu #language-Northern Kurdish #language-Central Kanuri #language-Kongo #language-Korean #language-Lao #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lombard #language-Latgalian #language-Luxembourgish #language-Luba-Lulua #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Magahi #language-Maithili #language-Malayalam #language-Marathi #language-Minangkabau #language-Macedonian #language-Maltese #language-Manipuri #language-Mossi #language-Maori #language-Burmese #language-Dutch #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Nepali (individual language) #language-Pedi #language-Nuer #language-Nyanja #language-Occitan (post 1500) #language-Odia #language-Pangasinan #language-Panjabi #language-Papiamento #language-Southern Pashto #language-Iranian Persian #language-Plateau Malagasy #language-Polish #language-Portuguese #language-Dari #language-Ayacucho Quechua #language-Romanian #language-Rundi #language-Russian #language-Sango #language-Sanskrit #language-Santali #language-Sicilian #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Southern Sotho #language-Spanish #language-Sardinian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Silesian #language-Tamil #language-Tamasheq #language-Tatar #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tok Pisin #language-Tswana #language-Tsonga #language-Turkmen #language-Tumbuka #language-Turkish #language-Twi #language-Central Atlas Tamazight #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Northern Uzbek #language-Venetian #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Eastern Yiddish #language-Yoruba #language-Yue Chinese #language-Chinese #language-Standard Malay #language-Zulu #license-cc-by-sa-4.0 #conditional-text-generation #arxiv-2207.04672 #region-us
Dataset Card for Flores200 ========================== Table of Contents ----------------- * Dataset Card for Flores200 + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation + Additional Information - Dataset Curators - Licensing Information - Citation Information Dataset Description ------------------- * Home: Flores * Repository: Github ### Dataset Summary FLORES is a benchmark dataset for machine translation between English and low-resource languages. > > The creation of FLORES200 doubles the existing language coverage of FLORES-101. > Given the nature of the new languages, which have less standardization and require > more specialized professional translations, the verification process became more complex. > This required modifications to the translation workflow. FLORES-200 has several languages > which were not translated from English. Specifically, several languages were translated > from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also > includes two script alternatives for four languages. FLORES-200 consists of translations > from 842 distinct web articles, totaling 3001 sentences. These sentences are divided > into three splits: dev, devtest, and test (hidden). On average, sentences are approximately > 21 words long. > > > Disclaimer: \*The Flores200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this. ### Languages The dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., "eng\_Latn", "ukr\_Cyrl"). See the webpage for code descriptions. Use the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command. Use a hyphenated pairing to get two langauges in one datapoint (e.g., "eng\_Latn-ukr\_Cyrl" will provide sentences in the format below). Dataset Structure ----------------- ### Data Instances A sample from the 'dev' split for the Ukrainian language ('ukr\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits. When using a hyphenated pairing or using the 'all' function, data will be presented as follows: The text is provided as-in the original dataset, without further preprocessing or tokenization. ### Data Fields * 'id': Row number for the data entry, starting at 1. * 'sentence': The full sentence in the specific language (may have \_lang for pairings) * 'URL': The URL for the English article from which the sentence was extracted. * 'domain': The domain of the sentence. * 'topic': The topic of the sentence. * 'has\_image': Whether the original article contains an image. * 'has\_hyperlink': Whether the sentence contains a hyperlink. ### Data Splits ### Dataset Creation Please refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation. Additional Information ---------------------- ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available here. Please cite the authors if you use these corpora in your work:
[ "### Dataset Summary\n\n\nFLORES is a benchmark dataset for machine translation between English and low-resource languages.\n\n\n\n> \n> The creation of FLORES200 doubles the existing language coverage of FLORES-101.\n> Given the nature of the new languages, which have less standardization and require\n> more specialized professional translations, the verification process became more complex.\n> This required modifications to the translation workflow. FLORES-200 has several languages\n> which were not translated from English. Specifically, several languages were translated\n> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also\n> includes two script alternatives for four languages. FLORES-200 consists of translations\n> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided\n> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately\n> 21 words long.\n> \n> \n> \n\n\nDisclaimer: \\*The Flores200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.", "### Supported Tasks and Leaderboards", "#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.", "### Languages\n\n\nThe dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., \"eng\\_Latn\", \"ukr\\_Cyrl\"). See the webpage for code descriptions.\nUse the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-ukr\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the 'dev' split for the Ukrainian language ('ukr\\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.\n\n\nWhen using a hyphenated pairing or using the 'all' function, data will be presented as follows:\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.", "### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'URL': The URL for the English article from which the sentence was extracted.\n* 'domain': The domain of the sentence.\n* 'topic': The topic of the sentence.\n* 'has\\_image': Whether the original article contains an image.\n* 'has\\_hyperlink': Whether the sentence contains a hyperlink.", "### Data Splits", "### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nSee paper for details.", "### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:" ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-Achinese #language-Mesopotamian Arabic #language-Ta'izzi-Adeni Arabic #language-Tunisian Arabic #language-Afrikaans #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Amharic #language-Levantine Arabic #language-Standard Arabic #language-Najdi Arabic #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Awadhi #language-Central Aymara #language-South Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-Bhojpuri #language-Banjar #language-Tibetan #language-Bosnian #language-Buginese #language-Bulgarian #language-Catalan #language-Cebuano #language-Czech #language-Chokwe #language-Central Kurdish #language-Crimean Tatar #language-Welsh #language-Danish #language-German #language-Southwestern Dinka #language-Dyula #language-Dzongkha #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Faroese #language-Fijian #language-Finnish #language-Fon #language-French #language-Friulian #language-Nigerian Fulfulde #language-West Central Oromo #language-Scottish Gaelic #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Chhattisgarhi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Iloko #language-Indonesian #language-Icelandic #language-Italian #language-Javanese #language-Japanese #language-Kabyle #language-Kachin #language-Kamba (Kenya) #language-Kannada #language-Kashmiri #language-Georgian #language-Kazakh #language-Kabiyè #language-Kabuverdianu #language-Halh Mongolian #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Kimbundu #language-Northern Kurdish #language-Central Kanuri #language-Kongo #language-Korean #language-Lao #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lombard #language-Latgalian #language-Luxembourgish #language-Luba-Lulua #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Magahi #language-Maithili #language-Malayalam #language-Marathi #language-Minangkabau #language-Macedonian #language-Maltese #language-Manipuri #language-Mossi #language-Maori #language-Burmese #language-Dutch #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Nepali (individual language) #language-Pedi #language-Nuer #language-Nyanja #language-Occitan (post 1500) #language-Odia #language-Pangasinan #language-Panjabi #language-Papiamento #language-Southern Pashto #language-Iranian Persian #language-Plateau Malagasy #language-Polish #language-Portuguese #language-Dari #language-Ayacucho Quechua #language-Romanian #language-Rundi #language-Russian #language-Sango #language-Sanskrit #language-Santali #language-Sicilian #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Southern Sotho #language-Spanish #language-Sardinian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Silesian #language-Tamil #language-Tamasheq #language-Tatar #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tok Pisin #language-Tswana #language-Tsonga #language-Turkmen #language-Tumbuka #language-Turkish #language-Twi #language-Central Atlas Tamazight #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Northern Uzbek #language-Venetian #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Eastern Yiddish #language-Yoruba #language-Yue Chinese #language-Chinese #language-Standard Malay #language-Zulu #license-cc-by-sa-4.0 #conditional-text-generation #arxiv-2207.04672 #region-us \n", "### Dataset Summary\n\n\nFLORES is a benchmark dataset for machine translation between English and low-resource languages.\n\n\n\n> \n> The creation of FLORES200 doubles the existing language coverage of FLORES-101.\n> Given the nature of the new languages, which have less standardization and require\n> more specialized professional translations, the verification process became more complex.\n> This required modifications to the translation workflow. FLORES-200 has several languages\n> which were not translated from English. Specifically, several languages were translated\n> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also\n> includes two script alternatives for four languages. FLORES-200 consists of translations\n> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided\n> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately\n> 21 words long.\n> \n> \n> \n\n\nDisclaimer: \\*The Flores200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.", "### Supported Tasks and Leaderboards", "#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.", "### Languages\n\n\nThe dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., \"eng\\_Latn\", \"ukr\\_Cyrl\"). See the webpage for code descriptions.\nUse the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-ukr\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the 'dev' split for the Ukrainian language ('ukr\\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.\n\n\nWhen using a hyphenated pairing or using the 'all' function, data will be presented as follows:\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.", "### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'URL': The URL for the English article from which the sentence was extracted.\n* 'domain': The domain of the sentence.\n* 'topic': The topic of the sentence.\n* 'has\\_image': Whether the original article contains an image.\n* 'has\\_hyperlink': Whether the sentence contains a hyperlink.", "### Data Splits", "### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nSee paper for details.", "### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:" ]
[ 1311, 239, 10, 57, 179, 105, 122, 5, 42, 11, 35 ]
[ "passage: ", "passage: TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-Achinese #language-Mesopotamian Arabic #language-Ta'izzi-Adeni Arabic #language-Tunisian Arabic #language-Afrikaans #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Amharic #language-Levantine Arabic #language-Standard Arabic #language-Najdi Arabic #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Awadhi #language-Central Aymara #language-South Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-Bhojpuri #language-Banjar #language-Tibetan #language-Bosnian #language-Buginese #language-Bulgarian #language-Catalan #language-Cebuano #language-Czech #language-Chokwe #language-Central Kurdish #language-Crimean Tatar #language-Welsh #language-Danish #language-German #language-Southwestern Dinka #language-Dyula #language-Dzongkha #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Faroese #language-Fijian #language-Finnish #language-Fon #language-French #language-Friulian #language-Nigerian Fulfulde #language-West Central Oromo #language-Scottish Gaelic #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Chhattisgarhi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Iloko #language-Indonesian #language-Icelandic #language-Italian #language-Javanese #language-Japanese #language-Kabyle #language-Kachin #language-Kamba (Kenya) #language-Kannada #language-Kashmiri #language-Georgian #language-Kazakh #language-Kabiyè #language-Kabuverdianu #language-Halh Mongolian #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Kimbundu #language-Northern Kurdish #language-Central Kanuri #language-Kongo #language-Korean #language-Lao #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lombard #language-Latgalian #language-Luxembourgish #language-Luba-Lulua #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Magahi #language-Maithili #language-Malayalam #language-Marathi #language-Minangkabau #language-Macedonian #language-Maltese #language-Manipuri #language-Mossi #language-Maori #language-Burmese #language-Dutch #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Nepali (individual language) #language-Pedi #language-Nuer #language-Nyanja #language-Occitan (post 1500) #language-Odia #language-Pangasinan #language-Panjabi #language-Papiamento #language-Southern Pashto #language-Iranian Persian #language-Plateau Malagasy #language-Polish #language-Portuguese #language-Dari #language-Ayacucho Quechua #language-Romanian #language-Rundi #language-Russian #language-Sango #language-Sanskrit #language-Santali #language-Sicilian #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Southern Sotho #language-Spanish #language-Sardinian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (individual language) #language-Silesian #language-Tamil #language-Tamasheq #language-Tatar #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tok Pisin #language-Tswana #language-Tsonga #language-Turkmen #language-Tumbuka #language-Turkish #language-Twi #language-Central Atlas Tamazight #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Northern Uzbek #language-Venetian #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Eastern Yiddish #language-Yoruba #language-Yue Chinese #language-Chinese #language-Standard Malay #language-Zulu #license-cc-by-sa-4.0 #conditional-text-generation #arxiv-2207.04672 #region-us \n### Dataset Summary\n\n\nFLORES is a benchmark dataset for machine translation between English and low-resource languages.\n\n\n\n> \n> The creation of FLORES200 doubles the existing language coverage of FLORES-101.\n> Given the nature of the new languages, which have less standardization and require\n> more specialized professional translations, the verification process became more complex.\n> This required modifications to the translation workflow. FLORES-200 has several languages\n> which were not translated from English. Specifically, several languages were translated\n> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also\n> includes two script alternatives for four languages. FLORES-200 consists of translations\n> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided\n> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately\n> 21 words long.\n> \n> \n> \n\n\nDisclaimer: \\*The Flores200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.### Supported Tasks and Leaderboards#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.### Languages\n\n\nThe dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., \"eng\\_Latn\", \"ukr\\_Cyrl\"). See the webpage for code descriptions.\nUse the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-ukr\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------" ]
85b9612b440ac0158d5722d0d45b849a012468ec
Opensource DataSet form a Kaggle competition https://www.kaggle.com/datasets/andreibuliga1/gdpr-fines-20182020-updated-23012021 GDPR-fines is a dataset with summary of GDPR cases from companies that were find between 2018 and 2021. You will find the summary plus the Articles violated in the cases (3 most importants + "Others" regrouping the rest of articles). Raw text and lemmatized text available plus multi-labels.
Maxmioti/GDRP-fines
[ "license:other", "region:us" ]
2022-07-17T08:57:46+00:00
{"license": "other"}
2022-07-17T09:03:34+00:00
[]
[]
TAGS #license-other #region-us
Opensource DataSet form a Kaggle competition URL GDPR-fines is a dataset with summary of GDPR cases from companies that were find between 2018 and 2021. You will find the summary plus the Articles violated in the cases (3 most importants + "Others" regrouping the rest of articles). Raw text and lemmatized text available plus multi-labels.
[]
[ "TAGS\n#license-other #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-other #region-us \n" ]
20aac0fbc008c17c735fe27c2eb410edbdc84381
# Dataset Summary The Great Books dataset is a set of texts based on the [St. John's College Great Books Program](https://www.sjc.edu/academic-programs/undergraduate/classes/seminar/annapolis-undergraduate-readings). It includes 83 works from authors on the Program.
erikanesse/great_books
[ "license:unlicense", "region:us" ]
2022-07-17T11:59:45+00:00
{"license": "unlicense"}
2022-07-17T12:47:12+00:00
[]
[]
TAGS #license-unlicense #region-us
# Dataset Summary The Great Books dataset is a set of texts based on the St. John's College Great Books Program. It includes 83 works from authors on the Program.
[ "# Dataset Summary\nThe Great Books dataset is a set of texts based on the St. John's College Great Books Program. It includes 83 works from authors on the Program." ]
[ "TAGS\n#license-unlicense #region-us \n", "# Dataset Summary\nThe Great Books dataset is a set of texts based on the St. John's College Great Books Program. It includes 83 works from authors on the Program." ]
[ 13, 40 ]
[ "passage: TAGS\n#license-unlicense #region-us \n# Dataset Summary\nThe Great Books dataset is a set of texts based on the St. John's College Great Books Program. It includes 83 works from authors on the Program." ]
e98b9216f60fc8dbabfe766e014534a08ff01949
## XWinograd Multilingual winograd schema challenge as used in [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786). ### Languages & Samples - "en": 2325 - "fr": 83 - "jp": 959 - "pt": 263 - "ru": 315 - "zh": 504 ### Dataset creation The Winograd schema challenges in this dataset combine winograd schemas from the XWinograd dataset introduced in Tikhonov et al and as it only contains 16 Chinese schemas, we add 488 Chinese schemas from `clue/cluewsc2020`. If you only want the original xwinograd chinese schemas only, do: `load_dataset("Muennighoff/xwinograd", "zh")["test"][0][:16]` ## Additional Information ### Citation Information ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{tikhonov2021heads, title={It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning}, author={Alexey Tikhonov and Max Ryabinin}, year={2021}, eprint={2106.12066}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### License Like the original [English winograd schema challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html), this dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). I.e. you can use it for commercial purposes etc. :) ### Contributions Thanks to Jordan Clive, @yongzx & @khalidalt for support on adding Chinese.
Muennighoff/xwinograd
[ "language:en", "language:fr", "language:ja", "language:pt", "language:ru", "language:zh", "license:cc-by-4.0", "arxiv:2211.01786", "arxiv:2106.12066", "region:us" ]
2022-07-17T14:20:09+00:00
{"language": ["en", "fr", "ja", "pt", "ru", "zh"], "license": "cc-by-4.0"}
2023-07-07T07:27:03+00:00
[ "2211.01786", "2106.12066" ]
[ "en", "fr", "ja", "pt", "ru", "zh" ]
TAGS #language-English #language-French #language-Japanese #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-2211.01786 #arxiv-2106.12066 #region-us
## XWinograd Multilingual winograd schema challenge as used in Crosslingual Generalization through Multitask Finetuning. ### Languages & Samples - "en": 2325 - "fr": 83 - "jp": 959 - "pt": 263 - "ru": 315 - "zh": 504 ### Dataset creation The Winograd schema challenges in this dataset combine winograd schemas from the XWinograd dataset introduced in Tikhonov et al and as it only contains 16 Chinese schemas, we add 488 Chinese schemas from 'clue/cluewsc2020'. If you only want the original xwinograd chinese schemas only, do: 'load_dataset("Muennighoff/xwinograd", "zh")["test"][0][:16]' ## Additional Information ### License Like the original English winograd schema challenge, this dataset is licensed under CC BY 4.0. I.e. you can use it for commercial purposes etc. :) ### Contributions Thanks to Jordan Clive, @yongzx & @khalidalt for support on adding Chinese.
[ "## XWinograd\n\nMultilingual winograd schema challenge as used in Crosslingual Generalization through Multitask Finetuning.", "### Languages & Samples\n\n- \"en\": 2325\n- \"fr\": 83\n- \"jp\": 959\n- \"pt\": 263 \n- \"ru\": 315\n- \"zh\": 504", "### Dataset creation\n\nThe Winograd schema challenges in this dataset combine winograd schemas from the XWinograd dataset introduced in Tikhonov et al and as it only contains 16 Chinese schemas, we add 488 Chinese schemas from 'clue/cluewsc2020'.\n\nIf you only want the original xwinograd chinese schemas only, do:\n\n'load_dataset(\"Muennighoff/xwinograd\", \"zh\")[\"test\"][0][:16]'", "## Additional Information", "### License\n\nLike the original English winograd schema challenge, this dataset is licensed under CC BY 4.0. I.e. you can use it for commercial purposes etc. :)", "### Contributions\n\nThanks to Jordan Clive, @yongzx & @khalidalt for support on adding Chinese." ]
[ "TAGS\n#language-English #language-French #language-Japanese #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-2211.01786 #arxiv-2106.12066 #region-us \n", "## XWinograd\n\nMultilingual winograd schema challenge as used in Crosslingual Generalization through Multitask Finetuning.", "### Languages & Samples\n\n- \"en\": 2325\n- \"fr\": 83\n- \"jp\": 959\n- \"pt\": 263 \n- \"ru\": 315\n- \"zh\": 504", "### Dataset creation\n\nThe Winograd schema challenges in this dataset combine winograd schemas from the XWinograd dataset introduced in Tikhonov et al and as it only contains 16 Chinese schemas, we add 488 Chinese schemas from 'clue/cluewsc2020'.\n\nIf you only want the original xwinograd chinese schemas only, do:\n\n'load_dataset(\"Muennighoff/xwinograd\", \"zh\")[\"test\"][0][:16]'", "## Additional Information", "### License\n\nLike the original English winograd schema challenge, this dataset is licensed under CC BY 4.0. I.e. you can use it for commercial purposes etc. :)", "### Contributions\n\nThanks to Jordan Clive, @yongzx & @khalidalt for support on adding Chinese." ]
[ 64, 29, 47, 114, 5, 39, 26 ]
[ "passage: TAGS\n#language-English #language-French #language-Japanese #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-2211.01786 #arxiv-2106.12066 #region-us \n## XWinograd\n\nMultilingual winograd schema challenge as used in Crosslingual Generalization through Multitask Finetuning.### Languages & Samples\n\n- \"en\": 2325\n- \"fr\": 83\n- \"jp\": 959\n- \"pt\": 263 \n- \"ru\": 315\n- \"zh\": 504### Dataset creation\n\nThe Winograd schema challenges in this dataset combine winograd schemas from the XWinograd dataset introduced in Tikhonov et al and as it only contains 16 Chinese schemas, we add 488 Chinese schemas from 'clue/cluewsc2020'.\n\nIf you only want the original xwinograd chinese schemas only, do:\n\n'load_dataset(\"Muennighoff/xwinograd\", \"zh\")[\"test\"][0][:16]'## Additional Information### License\n\nLike the original English winograd schema challenge, this dataset is licensed under CC BY 4.0. I.e. you can use it for commercial purposes etc. :)### Contributions\n\nThanks to Jordan Clive, @yongzx & @khalidalt for support on adding Chinese." ]
787af29673533c61886956a44fb0093850abed52
# Dataset Card for OpenFire ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://pyronear.org/pyro-vision/datasets.html#openfire - **Repository:** https://github.com/pyronear/pyro-vision - **Point of Contact:** Pyronear <https://pyronear.org/en/> ### Dataset Summary OpenFire is an image classification dataset for wildfire detection, collected from web searches. ### Supported Tasks and Leaderboards - `image-classification`: The dataset can be used to train a model for Image Classification. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image URL and its binary label. ``` { 'image_url': 'https://cdn-s-www.ledauphine.com/images/13C08274-6BA6-4577-B3A0-1E6C1B2A573C/FB1200/photo-1338240831.jpg', 'is_wildfire': true, } ``` ### Data Fields - `image_url`: the download URL of the image. - `is_wildfire`: a boolean value specifying whether there is an ongoing wildfire on the image. ### Data Splits The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images. ## Dataset Creation ### Curation Rationale The curators state that the current wildfire classification datasets typically contain close-up shots of wildfires, with limited variations of weather conditions, luminosity and backrgounds, making it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping with sun flares, foggy / cloudy weather conditions and small scale. ### Source Data #### Initial Data Collection and Normalization OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors. ### Annotations #### Annotation process Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors. #### Who are the annotators? François-Guillaume Fernandez ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators François-Guillaume Fernandez ### Licensing Information [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @software{Pyronear_PyroVision_2019, title={Pyrovision: wildfire early detection}, author={Pyronear contributors}, year={2019}, month={October}, publisher = {GitHub}, howpublished = {\url{https://github.com/pyronear/pyro-vision}} } ```
pyronear/openfire
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "size_categories:1K<n<10K", "source_datasets:original", "license:apache-2.0", "region:us" ]
2022-07-17T15:11:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "Wildfire image classification dataset collected using images from web searches."}
2022-12-11T22:25:43+00:00
[]
[]
TAGS #task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-original #license-apache-2.0 #region-us
# Dataset Card for OpenFire ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Point of Contact: Pyronear <URL ### Dataset Summary OpenFire is an image classification dataset for wildfire detection, collected from web searches. ### Supported Tasks and Leaderboards - 'image-classification': The dataset can be used to train a model for Image Classification. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image URL and its binary label. ### Data Fields - 'image_url': the download URL of the image. - 'is_wildfire': a boolean value specifying whether there is an ongoing wildfire on the image. ### Data Splits The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images. ## Dataset Creation ### Curation Rationale The curators state that the current wildfire classification datasets typically contain close-up shots of wildfires, with limited variations of weather conditions, luminosity and backrgounds, making it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping with sun flares, foggy / cloudy weather conditions and small scale. ### Source Data #### Initial Data Collection and Normalization OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors. ### Annotations #### Annotation process Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors. #### Who are the annotators? François-Guillaume Fernandez ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators François-Guillaume Fernandez ### Licensing Information Apache License 2.0.
[ "# Dataset Card for OpenFire", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Pyronear <URL", "### Dataset Summary\n\nOpenFire is an image classification dataset for wildfire detection, collected\nfrom web searches.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': The dataset can be used to train a model for Image Classification.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA data point comprises an image URL and its binary label.", "### Data Fields\n\n- 'image_url': the download URL of the image.\n- 'is_wildfire': a boolean value specifying whether there is an ongoing wildfire on the image.", "### Data Splits\n\nThe data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.", "## Dataset Creation", "### Curation Rationale\n\nThe curators state that the current wildfire classification datasets typically contain close-up shots of wildfires, with limited variations of weather conditions, luminosity and backrgounds,\nmaking it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping\nwith sun flares, foggy / cloudy weather conditions and small scale.", "### Source Data", "#### Initial Data Collection and Normalization\n\nOpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors.", "### Annotations", "#### Annotation process\n\nEach web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors.", "#### Who are the annotators?\n\nFrançois-Guillaume Fernandez", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nFrançois-Guillaume Fernandez", "### Licensing Information\n\nApache License 2.0." ]
[ "TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-original #license-apache-2.0 #region-us \n", "# Dataset Card for OpenFire", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Pyronear <URL", "### Dataset Summary\n\nOpenFire is an image classification dataset for wildfire detection, collected\nfrom web searches.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': The dataset can be used to train a model for Image Classification.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA data point comprises an image URL and its binary label.", "### Data Fields\n\n- 'image_url': the download URL of the image.\n- 'is_wildfire': a boolean value specifying whether there is an ongoing wildfire on the image.", "### Data Splits\n\nThe data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.", "## Dataset Creation", "### Curation Rationale\n\nThe curators state that the current wildfire classification datasets typically contain close-up shots of wildfires, with limited variations of weather conditions, luminosity and backrgounds,\nmaking it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping\nwith sun flares, foggy / cloudy weather conditions and small scale.", "### Source Data", "#### Initial Data Collection and Normalization\n\nOpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors.", "### Annotations", "#### Annotation process\n\nEach web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors.", "#### Who are the annotators?\n\nFrançois-Guillaume Fernandez", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nFrançois-Guillaume Fernandez", "### Licensing Information\n\nApache License 2.0." ]
[ 69, 8, 125, 24, 29, 33, 5, 6, 20, 46, 34, 5, 102, 4, 50, 5, 37, 15, 8, 8, 7, 8, 7, 5, 12, 11 ]
[ "passage: TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-original #license-apache-2.0 #region-us \n# Dataset Card for OpenFire## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Pyronear <URL### Dataset Summary\n\nOpenFire is an image classification dataset for wildfire detection, collected\nfrom web searches.### Supported Tasks and Leaderboards\n\n- 'image-classification': The dataset can be used to train a model for Image Classification.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nA data point comprises an image URL and its binary label.### Data Fields\n\n- 'image_url': the download URL of the image.\n- 'is_wildfire': a boolean value specifying whether there is an ongoing wildfire on the image.### Data Splits\n\nThe data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.## Dataset Creation### Curation Rationale\n\nThe curators state that the current wildfire classification datasets typically contain close-up shots of wildfires, with limited variations of weather conditions, luminosity and backrgounds,\nmaking it difficult to assess for real world performance. They argue that the limitations of datasets have partially contributed to the failure of some algorithms in coping\nwith sun flares, foggy / cloudy weather conditions and small scale." ]
3f8fe90b59fe1958fe39583b5d74e398d882f1ed
# Dataset Card for clmet_3_1 **NOTES**: - Some of the annotations in the `class` and `pos` configs are not properly formed. These are indicated with warning messages when the dataset is loaded. - In addition to the classes mentioned in the README for the dataset, there is an additional class in the `class` dataset called `QUOT`. As far as I can tell, this is used for tagging all quotation marks - When the `class` and `pos` configs are loaded, the available class/pos tags are shown at the top ## Dataset Statistics: The following table summarises the corpus make-up: |PERIOD | #authors | #texts |CQP3.1 | non-PUNC | |-----------|----------|---------------------|--------|---------| |1710-1780 | 51 | 88 | 12,182,064 | 10,415,721| |1780-1850 | 70 | 99 | 13,300,457 | 11,269,977| |1850-1920 | 91 | 146 | 14,858,239 | 12,657,159| |TOTAL | 212 | 333 | 40,340,760 | 34,342,857| |GENRE (all tokens):| | | | |---|---|---|---| | | **1710-1780**| **1780-1850** | **1850-1920** | |Narrative fiction | 5,405,645 | 5,780,352 | 7,561,339 | |Narrative non-fiction | 2,145,946 | 2,261,485 | 1,097,487 | |Drama | 523,318 | 441,040 | 763,352 | |Letters | 1,208,219 | 842,795 | 554,046 | |Treatise | 1,263,090 | 1,927,272 | 2,030,210 | |Other | 1,635,846 | 2,047,513 | 2,851,805 | ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** http://fedora.clarin-d.uni-saarland.de/clmet/clmet.html - **Repository:** [Needs More Information] - **Paper:** https://icame.info/icame_static/ij29/ij29-page69-82.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Henrik De Smet](https://www.arts.kuleuven.be/ling/func/members/hendrik-desmet/func) ### Dataset Summary The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkk� 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is: - The corpus covers the period 17101920, divided into three 70-year sub-periods. - The texts making up the corpus have all been written by British and Irish authors who are native speakers of English. - The corpus never contains more than three texts by the same author. - The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period. ### Supported Tasks and Leaderboards - `named-entity-recognition`: Since this dataset is tagged, it can be used for performing NER - `text-classification`: Each text comes with the date of the text and can be used to perform stylistic classification of texts ### Languages The text in the dataset is in English. The associated BCP-47 code is `en` ## Dataset Structure ### Data Instances A `plain` sample looks as follows: ``` {'text': "\nFAME AND THE POET\n \nDRAMATIS PERSONAE�\n \nHarry de Reves , a Poet .\n \n( This name , though of course of French origin , has become anglicised and is pronounced de Reevs . )\n \nDick Prattle , a Lieutenant-Major of the Royal Horse Marines .\n \nFame .\n \nScene\n \nThe Poet 's rooms in London .\nWindows in back .\nA high screen in a corner .\n \nTime : February 30th .\n \nThe Poet is sitting at a table writing .\n \n[ Enter Dick Prattle .\n \nPrattle : Hullo , Harry .\n \nde Reves : Hullo , Dick .\nGood Lord , where are you from ?\n \nPrattle ( casually ) : The ends of the earth .\n \nde Reves : Well , I 'm damned !\n \nPrattle : Thought I 'd drop in and see how you were getting on .\n \nde Reves : Well , that 's splendid .\nWhat are you doing in London ?\n \nPrattle : Well , I wanted to see if I could get one or two decent ties to wear - you can get nothing out there - then I thought I 'd have a look and see how London was getting on .\n \nde Reves : Splendid !\nHow 's everybody ?\n \nPrattle : All going strong .\n \nde Reves : That 's good .\n \nPrattle ( seeing paper and ink ) : But what are you doing ?\n \nde Reves : Writing .\n \nPrattle : Writing ?\nI did n't know you wrote .\n \nde Reves : Yes , I 've taken to it rather .\n \nPrattle : I say - writing 's no good .\nWhat do you write ?\n \nde Reves : Oh , poetry .\n \nPrattle : Poetry !\nGood Lord !\n \nde Reves : Yes , that sort of thing , you know .\n \nPrattle : Good Lord !\nDo you make any money by it ?\n \nde Reves : No .\nHardly any .\n \nPrattle : I say - why do n't you chuck it ?\n \nde Reves : Oh , I do n't know .\nSome people seem to like my stuff , rather .\nThat 's why I go on .\n \nPrattle : I 'd chuck it if there 's no money in it .\n \nde Reves : Ah , but then it 's hardly in your line , is it ?\nYou 'd hardly approve of poetry if there was money in it .\n \nPrattle : Oh , I do n't say that .\nIf I could make as much by poetry as I can by betting I do n't say I would n't try the poetry touch , only - -\n \nde Reves : Only what ?\n \nPrattle : Oh , I do n't know .\nOnly there seems more sense in betting , somehow .\n \nde Reves : Well , yes .\nI suppose it 's easier to tell what an earthly horse is going to do , than to tell what Pegasus - -\n \nPrattle : What 's Pegasus ?\n \nde Reves : Oh , the winged horse of poets .\n \nPrattle : I say !\nYou do n't believe in a winged horse , do you ?\n \nde Reves : In our trade we believe in all fabulous things .\nThey all represent some large truth to us .\nAn emblem like Pegasus is as real a thing to a poet as a Derby winner would be to you .\n \nPrattle : I say .\n( Give me a cigarette .\nThanks . )\nWhat ?\nThen you 'd believe in nymphs and fauns , and Pan , and all those kind of birds ?\n \nde Reves : Yes .\nYes .\nIn all of them .\n \nPrattle : Good Lord !\n \nde Reves : You believe in the Lord Mayor of London , do n't you ?\n \nPrattle : Yes , of course ; but what has - -\n \nde Reves : Four million people or so made him Lord Mayor , did n't they ?\nAnd he represents to them the wealth and dignity and tradition of - -\n \nPrattle : Yes ; but , I say , what has all this - -\n \nde Reves : Well , he stands for an idea to them , and they made him Lord Mayor , and so he is one ...\n \nPrattle : Well , of course he is .\n \nde Reves : In the same way Pan has been made what he is by millions ; by millions to whom he represents world-old traditions .\n \nPrattle ( rising from his chair and stepping backwards , laughing and looking at the Poet in a kind of assumed wonder ) : I say ... I say ... You old heathen ... but Good Lord ...\n \n[ He bumps into the high screen behind , pushing it back a little .\n \nde Reves : Look out !\nLook out !\n \nPrattle : What ?\nWhat 's the matter ?\n \nde Reves : The screen !\n \nPrattle : Oh , sorry , yes .\nI 'll put it right .\n \n[ He is about to go round behind it .\n \nde Reves : No , do n't go round there .\n \nPrattle : What ?\nWhy not ?\n \nde Reves : Oh , you would n't understand .\n \nPrattle : Would n't understand ?\nWhy , what have you got ?\n \nde Reves : Oh , one of those things ... You would n't understand .\n \nPrattle : Of course I 'd understand .\nLet 's have a look .\n \n[ The Poet walks towards Prattle and the screen .\nHe protests no further .\nPrattle looks round the corner of the screen .\n \nAn altar .\n \nde Reves ( removing the screen altogether ) : That is all .\nWhat do you make of it ?\n \n[ An altar of Greek design , shaped like a pedestal , is revealed .\nPapers litter the floor all about it .\n \nPrattle : I say - you always were an untidy devil .\n \nde Reves : Well , what do you make of it ?\n \nPrattle : It reminds me of your room at Eton .\n \nde Reves : My room at Eton ?\n \nPrattle : Yes , you always had papers all over your floor .\n \nde Reves : Oh , yes - -\n \nPrattle : And what are these ?\n \nde Reves : All these are poems ; and this is my altar to Fame .\n \nPrattle : To Fame ?\n \nde Reves : The same that Homer knew .\n \nPrattle : Good Lord !\n \nde Reves : Keats never saw her .\nShelley died too young .\nShe came late at the best of times , now scarcely ever .\n \nPrattle : But , my dear fellow , you do n't mean that you think there really is such a person ?\n \nde Reves : I offer all my songs to her .\n \nPrattle : But you do n't mean you think you could actually see Fame ?\n \nde Reves : We poets personify abstract things , and not poets only but sculptors7 and painters too .\nAll the great things of the world are those abstract things .\n \nPrattle : But what I mean is , they 're not really there , like you or me .\n \nde Reves : To us these things are more real than men , they outlive generations , they watch the passing of kingdoms : we go by them like dust ; they are still there , unmoved , unsmiling .\n \nPrattle : But , but , you ca n't think that you could see Fame , you do n't expect to see it ?\n \nde Reves : Not to me .\nNever to me .\nShe of the golden trumpet and Greek dress will never appear to me ... We all have our dreams .\n \nPrattle : I say - what have you been doing all day ?\n \nde Reves : I ?\nOh , only writing a sonnet .\n \nPrattle : Is it a long one ?\n \nde Reves : Not very .\n \nPrattle : About how long is it ?\n \nde Reves : About fourteen lines .\n \nPrattle ( impressively ) : I tell you what it is .\n \nde Reves : Yes ?\n \nPrattle : I tell you what .\nYou 've been overworking yourself .\nI once got like that on board the Sandhurst , working for the passing-out exam .\nI got so bad that I could have seen anything .\n \nde Reves : Seen anything ?\n \nPrattle : Lord , yes ; horned pigs , snakes with wings ; anything ; one of your winged horses even .\nThey gave me some stuff called bromide for it .\nYou take a rest .\n \nde Reves : But my dear fellow , you do n't understand at all .\nI merely said that abstract things are to a poet as near and real and visible as one of your bookmakers or barmaids .\n \nPrattle : I know .\nYou take a rest .\n \nde Reves : Well , perhaps I will .\nI 'd come with you to that musical comedy you 're going to see , only I 'm a bit tired after writing this ; it 's a tedious job .\nI 'll come another night .\n \nPrattle : How do you know I 'm going to see a musical comedy ?\n \nde Reves : Well , where would you go ?\nHamlet 's 8 on at the Lord Chamberlain 's .\nYou 're not going there .\n \nPrattle : Do I look like it ?\n \nde Reves : No .\n \nPrattle : Well , you 're quite right .\nI 'm going to see `` The Girl from Bedlam . ''\nSo long .\nI must push off now .\nIt 's getting late .\nYou take a rest .\nDo n't add another line to that sonnet ; fourteen 's quite enough .\nYou take a rest .\nDo n't have any dinner to-night , just rest .\nI was like that once myself .\nSo long .\n \nde Reves : So long .\n \n[ Exit Prattle .\nde Reves returns to his table and sits down .\n \nGood old Dick !\nHe 's the same as ever .\nLord , how time passes .\n \nHe takes his pen and his sonnet and makes a few alterations .\n \nWell , that 's finished .\nI ca n't do any more to it .\n \n[ He rises and goes to the screen ; he draws back part of it and goes up to the altar .\nHe is about to place his sonnet reverently at the foot of the altar amongst his other verses .\n \nNo , I will not put it there .\nThis one is worthy of the altar .\n \n[ He places the sonnet upon the altar itself .\n \nIf that sonnet does not give me fame , nothing that I have done before will give it to me , nothing that I ever will do .\n \n[ He replaces the screen and returns to his chair at the table .\nTwilight is coming on .\nHe sits with his elbow on the table , his head on his hand , or however the actor pleases .\n \nWell , well .\nFancy seeing Dick again .\nWell , Dick enjoys his life , so he 's no fool .\nWhat was that he said ?\n`` There 's no money in poetry .\nYou 'd better chuck it . ''\nTen years ' work and what have I to show for it ?\nThe admiration of men who care for poetry , and how many of them are there ?\nThere 's a bigger demand for smoked glasses to look at eclipses of the sun .\nWhy should Fame come to me ?\nHave n't I given up my days for her ?\nThat is enough to keep her away .\nI am a poet ; that is enough reason for her to slight me .\nProud and aloof and cold as marble , what does Fame care for us ?\nYes , Dick is right .\nIt 's a poor game chasing illusions , hunting the intangible , pursuing dreams .\nDreams ?\nWhy , we are ourselves dreams .\n \n[ He leans back in his chair .\n \nWe are such stuff As dreams are made on , and our little life Is rounded with a sleep .\n[ He is silent for a while .\nSuddenly he lifts his head .\n \nMy room at Eton , Dick said .\nAn untidy mess .\n \n[ As he lifts his head and says these words , twilight gives place to broad daylight , merely as a hint that the author of the play may have been mistaken , and the whole thing may have been no more than a poet 's dream .\n \nSo it was , and it 's an untidy mess there ( looking at screen ) too .\nDick 's right .\nI 'll tidy it up .\nI 'll burn the whole damned heap ,\n \n[ He advances impetuously towards the screen .\n \nevery damned poem that I was ever fool enough to waste my time on .\n \n[ He pushes back the screen .\nFame in a Greek dress with a long golden trumpet in her hand is seen standing motionless on the altar like a marble goddess .\n \nSo ... you have come !\n \n[ For a while he stands thunderstruck .\nThen he approaches the altar .\n \nDivine fair lady , you have come .\n \n[ He holds up his hand to her and leads her down from the altar and into the centre of the stage .\nAt whatever moment the actor finds it most convenient , he repossesses himself of the sonnet that he had placed on the altar .\nHe now offers it to Fame .\n \nThis is my sonnet .\nIs it well done ?\n \n[ Fame takes it and reads it in silence , while the Poet watches her rapturously .\n \nFame : You 're a bit of all right .\n \nde Reves : What ?\n \nFame : Some poet .\n \nde Reves : I - I - scarcely ... understand .\n \nFame : You 're IT .\n \nde Reves : But ... it is not possible ... are you she that knew Homer ?\n \nFame : Homer ?\nLord , yes .\nBlind old bat , ' e could n't see a yard .\n \nde Reves : O Heavens !\n \n[ Fame walks beautifully to the window .\nShe opens it and puts her head out .\n \nFame ( in a voice with which a woman in an upper storey would cry for help if the house was well alight ) : Hi !\nHi !\nBoys !\nHi !\nSay , folks !\nHi !\n \n[ The murmur of a gathering crowd is heard .\nFame blows her trumpet .\n \nFame : Hi , he 's a poet !\n( Quickly , over her shoulder . )\nWhat 's your name ?\n \nde Reves : De Reves .\n \nFame : His name 's de Reves .\n \nde Reves : Harry de Reves .\n \nFame : His pals call him Harry .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : Say , what 's your favourite colour ?\n \nde Reves : I ... I ... I do n't quite understand .\n \nFame : Well , which do you like best , green or blue ?\n \nde Reves : Oh - er - blue .\n \n[ She blows her trumpet out of the window .\n \nNo - er - I think green .\n \nFame : Green is his favourite colour .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : ` Ere , tell us something .\nThey want to know all about yer .\n \nde Reves : Would n't 9 you perhaps ... would they care to hear my sonnet , if you would - er ...\n \nFame ( picking up quill ) : Here , what 's this ?\n \nde Reves : Oh , that 's my pen .\n \nFame ( after another blast on her trumpet ) : He writes with a quill .\n \n[ Cheers from the Crowd .\n \nFame ( going to a cupboard ) : Here , what have you got in here ?\n \nde Reves : Oh ... er ... those are my breakfast things .\n \nFame ( finding a dirty plate ) : What have yer had on this one ?\n \nde Reves ( mournfully ) : Oh , eggs and bacon .\n \nFame ( at the window ) : He has eggs and bacon for breakfast .\n \nThe Crowd : Hip hip hip , hooray !\nHip hip hip , hooray !\nHip hip hip , hooray !\nFame : Hi , and what 's this ?\n \nde Reves ( miserably ) : Oh , a golf stick .\n \nFame : He 's a man 's man !\nHe 's a virile man !\nHe 's a manly man !\n \n[ Wild cheers from the Crowd , this time only from women 's voices .\n \nde Reves : Oh , this is terrible .\nThis is terrible .\nThis is terrible .\n \n[ Fame gives another peal on her horn .\nShe is about to speak .\n \nde Reves ( solemnly and mournfully ) : One moment , one moment ...\n \nFame : Well , out with it .\n \nde Reves : For ten years , divine lady , I have worshipped you , offering all my songs ... I find ... I find I am not worthy ...\n \nFame : Oh , you 're all right .\n \nde Reves : No , no , I am not worthy .\nIt can not be .\nIt can not possibly be .\nOthers deserve you more .\nI must say it !\nI can not possibly love you .\nOthers are worthy .\nYou will find others .\nBut I , no , no , no .\nIt can not be .\nIt can not be .\nOh , pardon me , but it must not .\n \n[ Meanwhile Fame has been lighting one of his cigarettes .\nShe sits in a comfortable chair , leans right back , and puts her feet right up on the table amongst the poet 's papers .\n \nOh , I fear I offend you .\nBut - it can not be .\n \nFame : Oh , that 's all right , old bird ; no offence .\nI ai n't going to leave you .\n \nde Reves : But - but - but - I do not understand .\n \nFame : I 've come to stay , I have .\n \n[ She blows a puff of smoke through her trumpet .\n \nCURTAIN .\n", 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger file', 'period': '1850-1920', 'id': '317'} ``` A `pos` sample looks as follows: ``` {'text': ['FAME', 'AND', 'THE', 'POET', 'DRAMATIS', 'PERSONAE�', 'Harry', 'de', 'Reves', ',', 'a', 'Poet', '.', '(', 'This', 'name', ',', 'though', 'of', 'course', 'of', 'French', 'origin', ',', 'has', 'become', 'anglicised', 'and', 'is', 'pronounced', 'de', 'Reevs', '.', ')', 'Dick', 'Prattle', ',', 'a', 'Lieutenant-Major', 'of', 'the', 'Royal', 'Horse', 'Marines', '.', 'Fame', '.', 'Scene', 'The', 'Poet', "'s", 'rooms', 'in', 'London', '.', 'Windows', 'in', 'back', '.', 'A', 'high', 'screen', 'in', 'a', 'corner', '.', 'Time', ':', 'February', '30th', '.', 'The', 'Poet', 'is', 'sitting', 'at', 'a', 'table', 'writing', '.', '[', 'Enter', 'Dick', 'Prattle', '.', 'Prattle', ':', 'Hullo', ',', 'Harry', '.', 'de', 'Reves', ':', 'Hullo', ',', 'Dick', '.', 'Good', 'Lord', ',', 'where', 'are', 'you', 'from', '?', 'Prattle', '(', 'casually', ')', ':', 'The', 'ends', 'of', 'the', 'earth', '.', 'de', 'Reves', ':', 'Well', ',', 'I', "'m", 'damned', '!', 'Prattle', ':', 'Thought', 'I', "'d", 'drop', 'in', 'and', 'see', 'how', 'you', 'were', 'getting', 'on', '.', 'de', 'Reves', ':', 'Well', ',', 'that', "'s", 'splendid', '.', 'What', 'are', 'you', 'doing', 'in', 'London', '?', 'Prattle', ':', 'Well', ',', 'I', 'wanted', 'to', 'see', 'if', 'I', 'could', 'get', 'one', 'or', 'two', 'decent', 'ties', 'to', 'wear', '-', 'you', 'can', 'get', 'nothing', 'out', 'there', '-', 'then', 'I', 'thought', 'I', "'d", 'have', 'a', 'look', 'and', 'see', 'how', 'London', 'was', 'getting', 'on', '.', 'de', 'Reves', ':', 'Splendid', '!', 'How', "'s", 'everybody', '?', 'Prattle', ':', 'All', 'going', 'strong', '.', 'de', 'Reves', ':', 'That', "'s", 'good', '.', 'Prattle', '(', 'seeing', 'paper', 'and', 'ink', ')', ':', 'But', 'what', 'are', 'you', 'doing', '?', 'de', 'Reves', ':', 'Writing', '.', 'Prattle', ':', 'Writing', '?', 'I', 'did', "n't", 'know', 'you', 'wrote', '.', 'de', 'Reves', ':', 'Yes', ',', 'I', "'ve", 'taken', 'to', 'it', 'rather', '.', 'Prattle', ':', 'I', 'say', '-', 'writing', "'s", 'no', 'good', '.', 'What', 'do', 'you', 'write', '?', 'de', 'Reves', ':', 'Oh', ',', 'poetry', '.', 'Prattle', ':', 'Poetry', '!', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Yes', ',', 'that', 'sort', 'of', 'thing', ',', 'you', 'know', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'Do', 'you', 'make', 'any', 'money', 'by', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Hardly', 'any', '.', 'Prattle', ':', 'I', 'say', '-', 'why', 'do', "n't", 'you', 'chuck', 'it', '?', 'de', 'Reves', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Some', 'people', 'seem', 'to', 'like', 'my', 'stuff', ',', 'rather', '.', 'That', "'s", 'why', 'I', 'go', 'on', '.', 'Prattle', ':', 'I', "'d", 'chuck', 'it', 'if', 'there', "'s", 'no', 'money', 'in', 'it', '.', 'de', 'Reves', ':', 'Ah', ',', 'but', 'then', 'it', "'s", 'hardly', 'in', 'your', 'line', ',', 'is', 'it', '?', 'You', "'d", 'hardly', 'approve', 'of', 'poetry', 'if', 'there', 'was', 'money', 'in', 'it', '.', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'say', 'that', '.', 'If', 'I', 'could', 'make', 'as', 'much', 'by', 'poetry', 'as', 'I', 'can', 'by', 'betting', 'I', 'do', "n't", 'say', 'I', 'would', "n't", 'try', 'the', 'poetry', 'touch', ',', 'only', '-', '-', 'de', 'Reves', ':', 'Only', 'what', '?', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Only', 'there', 'seems', 'more', 'sense', 'in', 'betting', ',', 'somehow', '.', 'de', 'Reves', ':', 'Well', ',', 'yes', '.', 'I', 'suppose', 'it', "'s", 'easier', 'to', 'tell', 'what', 'an', 'earthly', 'horse', 'is', 'going', 'to', 'do', ',', 'than', 'to', 'tell', 'what', 'Pegasus', '-', '-', 'Prattle', ':', 'What', "'s", 'Pegasus', '?', 'de', 'Reves', ':', 'Oh', ',', 'the', 'winged', 'horse', 'of', 'poets', '.', 'Prattle', ':', 'I', 'say', '!', 'You', 'do', "n't", 'believe', 'in', 'a', 'winged', 'horse', ',', 'do', 'you', '?', 'de', 'Reves', ':', 'In', 'our', 'trade', 'we', 'believe', 'in', 'all', 'fabulous', 'things', '.', 'They', 'all', 'represent', 'some', 'large', 'truth', 'to', 'us', '.', 'An', 'emblem', 'like', 'Pegasus', 'is', 'as', 'real', 'a', 'thing', 'to', 'a', 'poet', 'as', 'a', 'Derby', 'winner', 'would', 'be', 'to', 'you', '.', 'Prattle', ':', 'I', 'say', '.', '(', 'Give', 'me', 'a', 'cigarette', '.', 'Thanks', '.', ')', 'What', '?', 'Then', 'you', "'d", 'believe', 'in', 'nymphs', 'and', 'fauns', ',', 'and', 'Pan', ',', 'and', 'all', 'those', 'kind', 'of', 'birds', '?', 'de', 'Reves', ':', 'Yes', '.', 'Yes', '.', 'In', 'all', 'of', 'them', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'You', 'believe', 'in', 'the', 'Lord', 'Mayor', 'of', 'London', ',', 'do', "n't", 'you', '?', 'Prattle', ':', 'Yes', ',', 'of', 'course', ';', 'but', 'what', 'has', '-', '-', 'de', 'Reves', ':', 'Four', 'million', 'people', 'or', 'so', 'made', 'him', 'Lord', 'Mayor', ',', 'did', "n't", 'they', '?', 'And', 'he', 'represents', 'to', 'them', 'the', 'wealth', 'and', 'dignity', 'and', 'tradition', 'of', '-', '-', 'Prattle', ':', 'Yes', ';', 'but', ',', 'I', 'say', ',', 'what', 'has', 'all', 'this', '-', '-', 'de', 'Reves', ':', 'Well', ',', 'he', 'stands', 'for', 'an', 'idea', 'to', 'them', ',', 'and', 'they', 'made', 'him', 'Lord', 'Mayor', ',', 'and', 'so', 'he', 'is', 'one', '...', 'Prattle', ':', 'Well', ',', 'of', 'course', 'he', 'is', '.', 'de', 'Reves', ':', 'In', 'the', 'same', 'way', 'Pan', 'has', 'been', 'made', 'what', 'he', 'is', 'by', 'millions', ';', 'by', 'millions', 'to', 'whom', 'he', 'represents', 'world-old', 'traditions', '.', 'Prattle', '(', 'rising', 'from', 'his', 'chair', 'and', 'stepping', 'backwards', ',', 'laughing', 'and', 'looking', 'at', 'the', 'Poet', 'in', 'a', 'kind', 'of', 'assumed', 'wonder', ')', ':', 'I', 'say', '...', 'I', 'say', '...', 'You', 'old', 'heathen', '...', 'but', 'Good', 'Lord', '...', '[', 'He', 'bumps', 'into', 'the', 'high', 'screen', 'behind', ',', 'pushing', 'it', 'back', 'a', 'little', '.', 'de', 'Reves', ':', 'Look', 'out', '!', 'Look', 'out', '!', 'Prattle', ':', 'What', '?', 'What', "'s", 'the', 'matter', '?', 'de', 'Reves', ':', 'The', 'screen', '!', 'Prattle', ':', 'Oh', ',', 'sorry', ',', 'yes', '.', 'I', "'ll", 'put', 'it', 'right', '.', '[', 'He', 'is', 'about', 'to', 'go', 'round', 'behind', 'it', '.', 'de', 'Reves', ':', 'No', ',', 'do', "n't", 'go', 'round', 'there', '.', 'Prattle', ':', 'What', '?', 'Why', 'not', '?', 'de', 'Reves', ':', 'Oh', ',', 'you', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Would', "n't", 'understand', '?', 'Why', ',', 'what', 'have', 'you', 'got', '?', 'de', 'Reves', ':', 'Oh', ',', 'one', 'of', 'those', 'things', '...', 'You', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Of', 'course', 'I', "'d", 'understand', '.', 'Let', "'s", 'have', 'a', 'look', '.', '[', 'The', 'Poet', 'walks', 'towards', 'Prattle', 'and', 'the', 'screen', '.', 'He', 'protests', 'no', 'further', '.', 'Prattle', 'looks', 'round', 'the', 'corner', 'of', 'the', 'screen', '.', 'An', 'altar', '.', 'de', 'Reves', '(', 'removing', 'the', 'screen', 'altogether', ')', ':', 'That', 'is', 'all', '.', 'What', 'do', 'you', 'make', 'of', 'it', '?', '[', 'An', 'altar', 'of', 'Greek', 'design', ',', 'shaped', 'like', 'a', 'pedestal', ',', 'is', 'revealed', '.', 'Papers', 'litter', 'the', 'floor', 'all', 'about', 'it', '.', 'Prattle', ':', 'I', 'say', '-', 'you', 'always', 'were', 'an', 'untidy', 'devil', '.', 'de', 'Reves', ':', 'Well', ',', 'what', 'do', 'you', 'make', 'of', 'it', '?', 'Prattle', ':', 'It', 'reminds', 'me', 'of', 'your', 'room', 'at', 'Eton', '.', 'de', 'Reves', ':', 'My', 'room', 'at', 'Eton', '?', 'Prattle', ':', 'Yes', ',', 'you', 'always', 'had', 'papers', 'all', 'over', 'your', 'floor', '.', 'de', 'Reves', ':', 'Oh', ',', 'yes', '-', '-', 'Prattle', ':', 'And', 'what', 'are', 'these', '?', 'de', 'Reves', ':', 'All', 'these', 'are', 'poems', ';', 'and', 'this', 'is', 'my', 'altar', 'to', 'Fame', '.', 'Prattle', ':', 'To', 'Fame', '?', 'de', 'Reves', ':', 'The', 'same', 'that', 'Homer', 'knew', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Keats', 'never', 'saw', 'her', '.', 'Shelley', 'died', 'too', 'young', '.', 'She', 'came', 'late', 'at', 'the', 'best', 'of', 'times', ',', 'now', 'scarcely', 'ever', '.', 'Prattle', ':', 'But', ',', 'my', 'dear', 'fellow', ',', 'you', 'do', "n't", 'mean', 'that', 'you', 'think', 'there', 'really', 'is', 'such', 'a', 'person', '?', 'de', 'Reves', ':', 'I', 'offer', 'all', 'my', 'songs', 'to', 'her', '.', 'Prattle', ':', 'But', 'you', 'do', "n't", 'mean', 'you', 'think', 'you', 'could', 'actually', 'see', 'Fame', '?', 'de', 'Reves', ':', 'We', 'poets', 'personify', 'abstract', 'things', ',', 'and', 'not', 'poets', 'only', 'but', 'sculptors7', 'and', 'painters', 'too', '.', 'All', 'the', 'great', 'things', 'of', 'the', 'world', 'are', 'those', 'abstract', 'things', '.', 'Prattle', ':', 'But', 'what', 'I', 'mean', 'is', ',', 'they', "'re", 'not', 'really', 'there', ',', 'like', 'you', 'or', 'me', '.', 'de', 'Reves', ':', 'To', 'us', 'these', 'things', 'are', 'more', 'real', 'than', 'men', ',', 'they', 'outlive', 'generations', ',', 'they', 'watch', 'the', 'passing', 'of', 'kingdoms', ':', 'we', 'go', 'by', 'them', 'like', 'dust', ';', 'they', 'are', 'still', 'there', ',', 'unmoved', ',', 'unsmiling', '.', 'Prattle', ':', 'But', ',', 'but', ',', 'you', 'ca', "n't", 'think', 'that', 'you', 'could', 'see', 'Fame', ',', 'you', 'do', "n't", 'expect', 'to', 'see', 'it', '?', 'de', 'Reves', ':', 'Not', 'to', 'me', '.', 'Never', 'to', 'me', '.', 'She', 'of', 'the', 'golden', 'trumpet', 'and', 'Greek', 'dress', 'will', 'never', 'appear', 'to', 'me', '...', 'We', 'all', 'have', 'our', 'dreams', '.', 'Prattle', ':', 'I', 'say', '-', 'what', 'have', 'you', 'been', 'doing', 'all', 'day', '?', 'de', 'Reves', ':', 'I', '?', 'Oh', ',', 'only', 'writing', 'a', 'sonnet', '.', 'Prattle', ':', 'Is', 'it', 'a', 'long', 'one', '?', 'de', 'Reves', ':', 'Not', 'very', '.', 'Prattle', ':', 'About', 'how', 'long', 'is', 'it', '?', 'de', 'Reves', ':', 'About', 'fourteen', 'lines', '.', 'Prattle', '(', 'impressively', ')', ':', 'I', 'tell', 'you', 'what', 'it', 'is', '.', 'de', 'Reves', ':', 'Yes', '?', 'Prattle', ':', 'I', 'tell', 'you', 'what', '.', 'You', "'ve", 'been', 'overworking', 'yourself', '.', 'I', 'once', 'got', 'like', 'that', 'on', 'board', 'the', 'Sandhurst', ',', 'working', 'for', 'the', 'passing-out', 'exam', '.', 'I', 'got', 'so', 'bad', 'that', 'I', 'could', 'have', 'seen', 'anything', '.', 'de', 'Reves', ':', 'Seen', 'anything', '?', 'Prattle', ':', 'Lord', ',', 'yes', ';', 'horned', 'pigs', ',', 'snakes', 'with', 'wings', ';', 'anything', ';', 'one', 'of', 'your', 'winged', 'horses', 'even', '.', 'They', 'gave', 'me', 'some', 'stuff', 'called', 'bromide', 'for', 'it', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'But', 'my', 'dear', 'fellow', ',', 'you', 'do', "n't", 'understand', 'at', 'all', '.', 'I', 'merely', 'said', 'that', 'abstract', 'things', 'are', 'to', 'a', 'poet', 'as', 'near', 'and', 'real', 'and', 'visible', 'as', 'one', 'of', 'your', 'bookmakers', 'or', 'barmaids', '.', 'Prattle', ':', 'I', 'know', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'Well', ',', 'perhaps', 'I', 'will', '.', 'I', "'d", 'come', 'with', 'you', 'to', 'that', 'musical', 'comedy', 'you', "'re", 'going', 'to', 'see', ',', 'only', 'I', "'m", 'a', 'bit', 'tired', 'after', 'writing', 'this', ';', 'it', "'s", 'a', 'tedious', 'job', '.', 'I', "'ll", 'come', 'another', 'night', '.', 'Prattle', ':', 'How', 'do', 'you', 'know', 'I', "'m", 'going', 'to', 'see', 'a', 'musical', 'comedy', '?', 'de', 'Reves', ':', 'Well', ',', 'where', 'would', 'you', 'go', '?', 'Hamlet', "'s", '8', 'on', 'at', 'the', 'Lord', 'Chamberlain', "'s", '.', 'You', "'re", 'not', 'going', 'there', '.', 'Prattle', ':', 'Do', 'I', 'look', 'like', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Prattle', ':', 'Well', ',', 'you', "'re", 'quite', 'right', '.', 'I', "'m", 'going', 'to', 'see', '``', 'The', 'Girl', 'from', 'Bedlam', '.', "''", 'So', 'long', '.', 'I', 'must', 'push', 'off', 'now', '.', 'It', "'s", 'getting', 'late', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'add', 'another', 'line', 'to', 'that', 'sonnet', ';', 'fourteen', "'s", 'quite', 'enough', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'have', 'any', 'dinner', 'to-night', ',', 'just', 'rest', '.', 'I', 'was', 'like', 'that', 'once', 'myself', '.', 'So', 'long', '.', 'de', 'Reves', ':', 'So', 'long', '.', '[', 'Exit', 'Prattle', '.', 'de', 'Reves', 'returns', 'to', 'his', 'table', 'and', 'sits', 'down', '.', 'Good', 'old', 'Dick', '!', 'He', "'s", 'the', 'same', 'as', 'ever', '.', 'Lord', ',', 'how', 'time', 'passes', '.', 'He', 'takes', 'his', 'pen', 'and', 'his', 'sonnet', 'and', 'makes', 'a', 'few', 'alterations', '.', 'Well', ',', 'that', "'s", 'finished', '.', 'I', 'ca', "n't", 'do', 'any', 'more', 'to', 'it', '.', '[', 'He', 'rises', 'and', 'goes', 'to', 'the', 'screen', ';', 'he', 'draws', 'back', 'part', 'of', 'it', 'and', 'goes', 'up', 'to', 'the', 'altar', '.', 'He', 'is', 'about', 'to', 'place', 'his', 'sonnet', 'reverently', 'at', 'the', 'foot', 'of', 'the', 'altar', 'amongst', 'his', 'other', 'verses', '.', 'No', ',', 'I', 'will', 'not', 'put', 'it', 'there', '.', 'This', 'one', 'is', 'worthy', 'of', 'the', 'altar', '.', '[', 'He', 'places', 'the', 'sonnet', 'upon', 'the', 'altar', 'itself', '.', 'If', 'that', 'sonnet', 'does', 'not', 'give', 'me', 'fame', ',', 'nothing', 'that', 'I', 'have', 'done', 'before', 'will', 'give', 'it', 'to', 'me', ',', 'nothing', 'that', 'I', 'ever', 'will', 'do', '.', '[', 'He', 'replaces', 'the', 'screen', 'and', 'returns', 'to', 'his', 'chair', 'at', 'the', 'table', '.', 'Twilight', 'is', 'coming', 'on', '.', 'He', 'sits', 'with', 'his', 'elbow', 'on', 'the', 'table', ',', 'his', 'head', 'on', 'his', 'hand', ',', 'or', 'however', 'the', 'actor', 'pleases', '.', 'Well', ',', 'well', '.', 'Fancy', 'seeing', 'Dick', 'again', '.', 'Well', ',', 'Dick', 'enjoys', 'his', 'life', ',', 'so', 'he', "'s", 'no', 'fool', '.', 'What', 'was', 'that', 'he', 'said', '?', '``', 'There', "'s", 'no', 'money', 'in', 'poetry', '.', 'You', "'d", 'better', 'chuck', 'it', '.', "''", 'Ten', 'years', "'", 'work', 'and', 'what', 'have', 'I', 'to', 'show', 'for', 'it', '?', 'The', 'admiration', 'of', 'men', 'who', 'care', 'for', 'poetry', ',', 'and', 'how', 'many', 'of', 'them', 'are', 'there', '?', 'There', "'s", 'a', 'bigger', 'demand', 'for', 'smoked', 'glasses', 'to', 'look', 'at', 'eclipses', 'of', 'the', 'sun', '.', 'Why', 'should', 'Fame', 'come', 'to', 'me', '?', 'Have', "n't", 'I', 'given', 'up', 'my', 'days', 'for', 'her', '?', 'That', 'is', 'enough', 'to', 'keep', 'her', 'away', '.', 'I', 'am', 'a', 'poet', ';', 'that', 'is', 'enough', 'reason', 'for', 'her', 'to', 'slight', 'me', '.', 'Proud', 'and', 'aloof', 'and', 'cold', 'as', 'marble', ',', 'what', 'does', 'Fame', 'care', 'for', 'us', '?', 'Yes', ',', 'Dick', 'is', 'right', '.', 'It', "'s", 'a', 'poor', 'game', 'chasing', 'illusions', ',', 'hunting', 'the', 'intangible', ',', 'pursuing', 'dreams', '.', 'Dreams', '?', 'Why', ',', 'we', 'are', 'ourselves', 'dreams', '.', '[', 'He', 'leans', 'back', 'in', 'his', 'chair', '.', 'We', 'are', 'such', 'stuff', 'As', 'dreams', 'are', 'made', 'on', ',', 'and', 'our', 'little', 'life', 'Is', 'rounded', 'with', 'a', 'sleep', '.', '[', 'He', 'is', 'silent', 'for', 'a', 'while', '.', 'Suddenly', 'he', 'lifts', 'his', 'head', '.', 'My', 'room', 'at', 'Eton', ',', 'Dick', 'said', '.', 'An', 'untidy', 'mess', '.', '[', 'As', 'he', 'lifts', 'his', 'head', 'and', 'says', 'these', 'words', ',', 'twilight', 'gives', 'place', 'to', 'broad', 'daylight', ',', 'merely', 'as', 'a', 'hint', 'that', 'the', 'author', 'of', 'the', 'play', 'may', 'have', 'been', 'mistaken', ',', 'and', 'the', 'whole', 'thing', 'may', 'have', 'been', 'no', 'more', 'than', 'a', 'poet', "'s", 'dream', '.', 'So', 'it', 'was', ',', 'and', 'it', "'s", 'an', 'untidy', 'mess', 'there', '(', 'looking', 'at', 'screen', ')', 'too', '.', 'Dick', "'s", 'right', '.', 'I', "'ll", 'tidy', 'it', 'up', '.', 'I', "'ll", 'burn', 'the', 'whole', 'damned', 'heap', ',', '[', 'He', 'advances', 'impetuously', 'towards', 'the', 'screen', '.', 'every', 'damned', 'poem', 'that', 'I', 'was', 'ever', 'fool', 'enough', 'to', 'waste', 'my', 'time', 'on', '.', '[', 'He', 'pushes', 'back', 'the', 'screen', '.', 'Fame', 'in', 'a', 'Greek', 'dress', 'with', 'a', 'long', 'golden', 'trumpet', 'in', 'her', 'hand', 'is', 'seen', 'standing', 'motionless', 'on', 'the', 'altar', 'like', 'a', 'marble', 'goddess', '.', 'So', '...', 'you', 'have', 'come', '!', '[', 'For', 'a', 'while', 'he', 'stands', 'thunderstruck', '.', 'Then', 'he', 'approaches', 'the', 'altar', '.', 'Divine', 'fair', 'lady', ',', 'you', 'have', 'come', '.', '[', 'He', 'holds', 'up', 'his', 'hand', 'to', 'her', 'and', 'leads', 'her', 'down', 'from', 'the', 'altar', 'and', 'into', 'the', 'centre', 'of', 'the', 'stage', '.', 'At', 'whatever', 'moment', 'the', 'actor', 'finds', 'it', 'most', 'convenient', ',', 'he', 'repossesses', 'himself', 'of', 'the', 'sonnet', 'that', 'he', 'had', 'placed', 'on', 'the', 'altar', '.', 'He', 'now', 'offers', 'it', 'to', 'Fame', '.', 'This', 'is', 'my', 'sonnet', '.', 'Is', 'it', 'well', 'done', '?', '[', 'Fame', 'takes', 'it', 'and', 'reads', 'it', 'in', 'silence', ',', 'while', 'the', 'Poet', 'watches', 'her', 'rapturously', '.', 'Fame', ':', 'You', "'re", 'a', 'bit', 'of', 'all', 'right', '.', 'de', 'Reves', ':', 'What', '?', 'Fame', ':', 'Some', 'poet', '.', 'de', 'Reves', ':', 'I', '-', 'I', '-', 'scarcely', '...', 'understand', '.', 'Fame', ':', 'You', "'re", 'IT', '.', 'de', 'Reves', ':', 'But', '...', 'it', 'is', 'not', 'possible', '...', 'are', 'you', 'she', 'that', 'knew', 'Homer', '?', 'Fame', ':', 'Homer', '?', 'Lord', ',', 'yes', '.', 'Blind', 'old', 'bat', ',', "'", 'e', 'could', "n't", 'see', 'a', 'yard', '.', 'de', 'Reves', ':', 'O', 'Heavens', '!', '[', 'Fame', 'walks', 'beautifully', 'to', 'the', 'window', '.', 'She', 'opens', 'it', 'and', 'puts', 'her', 'head', 'out', '.', 'Fame', '(', 'in', 'a', 'voice', 'with', 'which', 'a', 'woman', 'in', 'an', 'upper', 'storey', 'would', 'cry', 'for', 'help', 'if', 'the', 'house', 'was', 'well', 'alight', ')', ':', 'Hi', '!', 'Hi', '!', 'Boys', '!', 'Hi', '!', 'Say', ',', 'folks', '!', 'Hi', '!', '[', 'The', 'murmur', 'of', 'a', 'gathering', 'crowd', 'is', 'heard', '.', 'Fame', 'blows', 'her', 'trumpet', '.', 'Fame', ':', 'Hi', ',', 'he', "'s", 'a', 'poet', '!', '(', 'Quickly', ',', 'over', 'her', 'shoulder', '.', ')', 'What', "'s", 'your', 'name', '?', 'de', 'Reves', ':', 'De', 'Reves', '.', 'Fame', ':', 'His', 'name', "'s", 'de', 'Reves', '.', 'de', 'Reves', ':', 'Harry', 'de', 'Reves', '.', 'Fame', ':', 'His', 'pals', 'call', 'him', 'Harry', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', 'Say', ',', 'what', "'s", 'your', 'favourite', 'colour', '?', 'de', 'Reves', ':', 'I', '...', 'I', '...', 'I', 'do', "n't", 'quite', 'understand', '.', 'Fame', ':', 'Well', ',', 'which', 'do', 'you', 'like', 'best', ',', 'green', 'or', 'blue', '?', 'de', 'Reves', ':', 'Oh', '-', 'er', '-', 'blue', '.', '[', 'She', 'blows', 'her', 'trumpet', 'out', 'of', 'the', 'window', '.', 'No', '-', 'er', '-', 'I', 'think', 'green', '.', 'Fame', ':', 'Green', 'is', 'his', 'favourite', 'colour', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', '`', 'Ere', ',', 'tell', 'us', 'something', '.', 'They', 'want', 'to', 'know', 'all', 'about', 'yer', '.', 'de', 'Reves', ':', 'Would', "n't", '9', 'you', 'perhaps', '...', 'would', 'they', 'care', 'to', 'hear', 'my', 'sonnet', ',', 'if', 'you', 'would', '-', 'er', '...', 'Fame', '(', 'picking', 'up', 'quill', ')', ':', 'Here', ',', 'what', "'s", 'this', '?', 'de', 'Reves', ':', 'Oh', ',', 'that', "'s", 'my', 'pen', '.', 'Fame', '(', 'after', 'another', 'blast', 'on', 'her', 'trumpet', ')', ':', 'He', 'writes', 'with', 'a', 'quill', '.', '[', 'Cheers', 'from', 'the', 'Crowd', '.', 'Fame', '(', 'going', 'to', 'a', 'cupboard', ')', ':', 'Here', ',', 'what', 'have', 'you', 'got', 'in', 'here', '?', 'de', 'Reves', ':', 'Oh', '...', 'er', '...', 'those', 'are', 'my', 'breakfast', 'things', '.', 'Fame', '(', 'finding', 'a', 'dirty', 'plate', ')', ':', 'What', 'have', 'yer', 'had', 'on', 'this', 'one', '?', 'de', 'Reves', '(', 'mournfully', ')', ':', 'Oh', ',', 'eggs', 'and', 'bacon', '.', 'Fame', '(', 'at', 'the', 'window', ')', ':', 'He', 'has', 'eggs', 'and', 'bacon', 'for', 'breakfast', '.', 'The', 'Crowd', ':', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Fame', ':', 'Hi', ',', 'and', 'what', "'s", 'this', '?', 'de', 'Reves', '(', 'miserably', ')', ':', 'Oh', ',', 'a', 'golf', 'stick', '.', 'Fame', ':', 'He', "'s", 'a', 'man', "'s", 'man', '!', 'He', "'s", 'a', 'virile', 'man', '!', 'He', "'s", 'a', 'manly', 'man', '!', '[', 'Wild', 'cheers', 'from', 'the', 'Crowd', ',', 'this', 'time', 'only', 'from', 'women', "'s", 'voices', '.', 'de', 'Reves', ':', 'Oh', ',', 'this', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', '[', 'Fame', 'gives', 'another', 'peal', 'on', 'her', 'horn', '.', 'She', 'is', 'about', 'to', 'speak', '.', 'de', 'Reves', '(', 'solemnly', 'and', 'mournfully', ')', ':', 'One', 'moment', ',', 'one', 'moment', '...', 'Fame', ':', 'Well', ',', 'out', 'with', 'it', '.', 'de', 'Reves', ':', 'For', 'ten', 'years', ',', 'divine', 'lady', ',', 'I', 'have', 'worshipped', 'you', ',', 'offering', 'all', 'my', 'songs', '...', 'I', 'find', '...', 'I', 'find', 'I', 'am', 'not', 'worthy', '...', 'Fame', ':', 'Oh', ',', 'you', "'re", 'all', 'right', '.', 'de', 'Reves', ':', 'No', ',', 'no', ',', 'I', 'am', 'not', 'worthy', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'possibly', 'be', '.', 'Others', 'deserve', 'you', 'more', '.', 'I', 'must', 'say', 'it', '!', 'I', 'can', 'not', 'possibly', 'love', 'you', '.', 'Others', 'are', 'worthy', '.', 'You', 'will', 'find', 'others', '.', 'But', 'I', ',', 'no', ',', 'no', ',', 'no', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'be', '.', 'Oh', ',', 'pardon', 'me', ',', 'but', 'it', 'must', 'not', '.', '[', 'Meanwhile', 'Fame', 'has', 'been', 'lighting', 'one', 'of', 'his', 'cigarettes', '.', 'She', 'sits', 'in', 'a', 'comfortable', 'chair', ',', 'leans', 'right', 'back', ',', 'and', 'puts', 'her', 'feet', 'right', 'up', 'on', 'the', 'table', 'amongst', 'the', 'poet', "'s", 'papers', '.', 'Oh', ',', 'I', 'fear', 'I', 'offend', 'you', '.', 'But', '-', 'it', 'can', 'not', 'be', '.', 'Fame', ':', 'Oh', ',', 'that', "'s", 'all', 'right', ',', 'old', 'bird', ';', 'no', 'offence', '.', 'I', 'ai', "n't", 'going', 'to', 'leave', 'you', '.', 'de', 'Reves', ':', 'But', '-', 'but', '-', 'but', '-', 'I', 'do', 'not', 'understand', '.', 'Fame', ':', 'I', "'ve", 'come', 'to', 'stay', ',', 'I', 'have', '.', '[', 'She', 'blows', 'a', 'puff', 'of', 'smoke', 'through', 'her', 'trumpet', '.', 'CURTAIN', '.'], 'pos_tags': [10, 0, 2, 12, 12, 12, 12, 12, 12, 38, 2, 12, 38, 41, 2, 10, 38, 18, 5, 10, 5, 6, 10, 38, 30, 29, 29, 0, 30, 6, 12, 12, 38, 42, 12, 12, 38, 2, 12, 5, 2, 12, 12, 13, 38, 12, 38, 10, 2, 12, 15, 11, 5, 12, 38, 11, 5, 18, 38, 2, 6, 10, 5, 2, 10, 38, 10, 38, 12, 6, 38, 2, 12, 30, 28, 5, 2, 10, 10, 38, 41, 12, 12, 12, 38, 10, 38, 12, 38, 12, 38, 12, 12, 38, 12, 38, 12, 38, 6, 12, 38, 35, 31, 16, 5, 22, 10, 41, 18, 42, 38, 2, 11, 5, 2, 10, 38, 12, 12, 38, 25, 38, 16, 31, 29, 22, 10, 38, 27, 16, 9, 26, 21, 0, 26, 35, 16, 27, 28, 5, 38, 12, 12, 38, 25, 38, 32, 30, 6, 38, 33, 31, 16, 28, 5, 12, 22, 10, 38, 18, 38, 16, 27, 24, 26, 5, 16, 9, 26, 1, 0, 1, 6, 11, 24, 26, 38, 16, 9, 26, 10, 21, 18, 38, 18, 16, 27, 16, 9, 26, 2, 10, 0, 26, 35, 12, 27, 28, 5, 38, 12, 12, 38, 6, 22, 35, 30, 10, 22, 10, 38, 2, 28, 6, 38, 12, 12, 38, 32, 30, 6, 38, 10, 41, 28, 10, 0, 10, 42, 38, 0, 33, 31, 16, 28, 22, 12, 12, 38, 28, 38, 10, 38, 28, 22, 16, 27, 36, 26, 16, 27, 38, 12, 12, 38, 25, 38, 16, 31, 29, 24, 16, 18, 38, 10, 38, 16, 31, 38, 28, 30, 18, 6, 38, 33, 31, 16, 26, 22, 12, 12, 38, 25, 38, 10, 38, 10, 38, 10, 22, 6, 12, 22, 12, 12, 38, 25, 38, 2, 10, 5, 10, 38, 16, 31, 38, 10, 38, 6, 12, 22, 26, 16, 26, 2, 10, 5, 16, 22, 12, 12, 38, 25, 38, 18, 18, 38, 10, 38, 16, 31, 38, 35, 31, 36, 16, 31, 16, 22, 12, 12, 38, 25, 38, 16, 31, 36, 26, 38, 2, 11, 31, 24, 26, 17, 10, 38, 18, 38, 2, 30, 35, 16, 31, 5, 38, 10, 38, 16, 9, 26, 16, 5, 3, 30, 2, 10, 5, 16, 38, 12, 12, 38, 25, 38, 0, 18, 16, 30, 18, 5, 17, 10, 38, 30, 16, 22, 16, 9, 18, 26, 5, 10, 5, 3, 27, 10, 5, 16, 38, 10, 38, 25, 38, 16, 31, 36, 26, 2, 38, 5, 16, 9, 26, 18, 18, 5, 10, 5, 16, 31, 5, 28, 16, 31, 36, 26, 16, 9, 36, 26, 2, 10, 10, 38, 18, 38, 38, 12, 12, 38, 18, 33, 22, 10, 38, 25, 38, 16, 31, 36, 26, 38, 18, 3, 30, 7, 10, 5, 28, 38, 18, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 16, 30, 7, 24, 26, 33, 2, 6, 10, 30, 28, 24, 26, 38, 5, 24, 26, 33, 12, 38, 38, 10, 38, 33, 30, 12, 22, 12, 12, 38, 25, 38, 2, 29, 10, 5, 11, 38, 10, 38, 16, 31, 22, 16, 31, 36, 26, 5, 2, 29, 10, 38, 31, 16, 22, 12, 12, 38, 5, 17, 10, 16, 31, 5, 2, 6, 11, 38, 16, 18, 31, 2, 6, 10, 24, 16, 38, 2, 10, 5, 12, 30, 18, 6, 2, 10, 24, 2, 10, 5, 2, 12, 10, 9, 26, 24, 16, 38, 10, 38, 16, 31, 38, 41, 26, 16, 2, 10, 38, 11, 38, 42, 33, 22, 18, 16, 9, 26, 5, 11, 0, 11, 38, 0, 12, 38, 0, 14, 2, 10, 5, 11, 22, 12, 12, 38, 25, 38, 25, 38, 5, 2, 5, 16, 38, 10, 38, 6, 12, 22, 12, 12, 38, 16, 31, 5, 2, 12, 12, 5, 12, 38, 31, 36, 16, 22, 10, 38, 25, 38, 5, 10, 38, 0, 33, 30, 38, 38, 12, 12, 38, 1, 1, 11, 0, 18, 27, 16, 12, 12, 38, 27, 36, 16, 22, 0, 16, 30, 24, 16, 2, 10, 0, 10, 0, 10, 5, 38, 38, 10, 38, 25, 38, 0, 38, 16, 31, 38, 33, 30, 14, 2, 38, 38, 12, 12, 38, 25, 38, 16, 30, 5, 2, 10, 24, 16, 38, 0, 16, 27, 16, 12, 12, 38, 0, 18, 16, 30, 1, -1, 10, 38, 18, 38, 5, 10, 16, 30, 38, 12, 12, 38, 5, 2, 6, 10, 12, 30, 29, 29, 33, 16, 30, 5, 11, 38, 5, 11, 24, 33, 16, 30, 6, 11, 38, 10, 41, 28, 5, 17, 10, 0, 28, 18, 38, 28, 0, 28, 5, 2, 12, 5, 2, 10, 5, 6, 10, 42, 38, 16, 31, -1, 16, 31, -1, 16, 6, 11, -1, 0, 12, 12, -1, 41, 16, 30, 5, 2, 6, 10, 18, 38, 28, 16, 18, 2, 6, 38, 12, 12, 38, 31, 21, 22, 26, 21, 22, 10, 38, 33, 22, 33, 30, 2, 10, 22, 12, 12, 38, 2, 10, 22, 10, 38, 25, 38, 18, 38, 25, 38, 16, 9, 26, 16, 18, 38, 41, 16, 30, 18, 24, 26, 10, 5, 16, 38, 12, 12, 38, 25, 38, 31, 36, 26, 10, 18, 38, 10, 38, 33, 22, 35, 36, 22, 12, 12, 38, 25, 38, 16, 9, 36, 26, 38, 10, 38, 9, 36, 26, 22, 35, 38, 33, 31, 16, 27, 22, 12, 12, 38, 25, 38, 1, 5, 2, 11, -1, 16, 9, 36, 26, 38, 10, 38, 5, 10, 16, 9, 26, 38, 26, 30, 26, 2, 10, 38, 41, 12, 12, 30, 5, 12, 0, 2, 10, 38, 16, 30, 18, 7, 38, 10, 11, 31, 2, 10, 5, 2, 10, 38, 2, 10, 38, 12, 12, 41, 28, 2, 10, 18, 42, 38, 32, 30, 18, 38, 33, 31, 16, 26, 5, 16, 22, 41, 2, 10, 5, 6, 10, 38, 29, 5, 2, 10, 38, 30, 29, 38, 11, 31, 2, 10, 18, 5, 16, 38, 10, 38, 16, 31, 38, 16, 18, 27, 2, 6, 10, 38, 12, 12, 38, 25, 38, 33, 31, 16, 26, 5, 16, 22, 10, 38, 16, 30, 16, 5, 17, 10, 5, 12, 38, 12, 12, 38, 17, 10, 5, 12, 22, 10, 38, 25, 38, 16, 18, 27, 11, 18, 5, 17, 10, 38, 12, 12, 38, 25, 38, 25, 38, 38, 10, 38, 0, 33, 31, 2, 22, 12, 12, 38, 14, 2, 31, 11, 38, 0, 2, 30, 17, 10, 24, 12, 38, 10, 38, 24, 12, 22, 12, 12, 38, 2, 6, 5, 12, 27, 38, 10, 38, 6, 12, 22, 12, 12, 38, 12, 18, 27, 16, 38, 12, 27, 18, 6, 38, 16, 27, 18, 5, 2, 8, 5, 11, 38, 18, 18, 18, 38, 10, 38, 0, 38, 17, 6, 10, 38, 16, 31, 36, 26, 5, 16, 31, 3, 18, 30, 14, 2, 10, 22, 12, 12, 38, 16, 31, 14, 17, 11, 24, 16, 38, 10, 38, 0, 16, 31, 36, 26, 16, 31, 16, 9, 18, 26, 12, 22, 12, 12, 38, 16, 11, 31, 6, 11, 38, 0, 36, 11, 6, 0, 6, 0, 11, 18, 38, 14, 2, 6, 11, 5, 2, 10, 31, 2, 6, 11, 38, 10, 38, 0, 33, 16, 31, 30, 38, 16, 31, 36, 18, 18, 38, 5, 16, 0, 16, 38, 12, 12, 38, 24, 16, 2, 11, 31, 19, 6, 5, 11, 38, 16, 31, 11, 38, 16, 31, 2, 10, 5, 11, 38, 16, 31, 5, 16, 31, 10, 38, 16, 31, 18, 18, 38, 6, 38, 12, 38, 10, 38, 0, 38, 18, 38, 16, 9, 36, 26, 5, 16, 9, 26, 12, 38, 16, 31, 36, 26, 24, 26, 16, 22, 12, 12, 38, 36, 24, 16, 38, 18, 24, 16, 38, 16, 5, 2, 6, 10, 0, 6, 10, 9, 18, 26, 24, 16, -1, 16, 18, 31, 17, 11, 38, 10, 38, 16, 31, 38, 33, 31, 16, 29, 28, 2, 10, 22, 12, 12, 38, 16, 22, 25, 38, 18, 28, 2, 10, 38, 10, 38, 30, 16, 2, 6, 1, 22, 12, 12, 38, 36, 18, 38, 10, 38, 18, 35, 18, 30, 16, 22, 12, 12, 38, 5, 10, 11, 38, 10, 41, 18, 42, 38, 16, 26, 16, 33, 16, 30, 38, 12, 12, 38, 25, 22, 10, 38, 16, 26, 16, 33, 38, 16, 31, 29, 28, 16, 38, 16, 18, 27, 5, 5, 5, 10, 2, 12, 38, 28, 5, 2, 6, 10, 38, 16, 27, 18, 6, 5, 16, 9, 26, 29, 10, 38, 12, 12, 38, 29, 10, 22, 10, 38, 12, 38, 25, 38, 29, 11, 38, 11, 5, 11, 38, 10, 38, 1, 5, 17, 29, 11, 18, 38, 16, 27, 16, 2, 10, 27, 10, 5, 16, 38, 16, 31, 2, 10, 38, 12, 12, 38, 0, 17, 6, 10, 38, 16, 31, 36, 26, 5, 2, 38, 16, 18, 27, 5, 6, 11, 31, 24, 2, 10, 5, 6, 0, 6, 0, 6, 5, 1, 5, 17, 11, 0, 11, 38, 10, 38, 16, 31, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 18, 16, 9, 38, 16, 9, 26, 5, 16, 24, 2, 6, 10, 16, 31, 28, 24, 26, 38, 18, 16, 31, 2, 10, 29, 5, 28, 2, 38, 16, 30, 2, 6, 10, 38, 16, 9, 26, 2, 10, 38, 10, 38, 35, 31, 16, 31, 16, 31, 28, 24, 26, 2, 6, 10, 22, 12, 12, 38, 25, 38, 35, 9, 16, 26, 22, 12, 30, 1, 5, 5, 2, 12, 12, 15, 38, 16, 31, 36, 28, 18, 38, 10, 38, 31, 16, 31, 5, 16, 22, 12, 12, 38, 25, 38, 10, 38, 18, 38, 16, 31, 18, 6, 38, 16, 31, 28, 24, 26, 39, 2, 12, 5, 12, 38, 40, 18, 18, 38, 16, 9, 26, 21, 18, 38, 16, 30, 28, 18, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 24, 2, 10, 38, 10, 30, 18, 6, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 10, 38, 18, 10, 38, 16, 27, 6, 5, 5, 16, 38, 18, 18, 38, 12, 12, 38, 18, 18, 38, 41, 10, 12, 38, 12, 12, 30, 24, 17, 10, 0, 30, 21, 38, 6, 6, 12, 22, 16, 30, 2, 6, 18, 18, 38, 12, 38, 35, 10, 30, 38, 16, 30, 17, 10, 0, 17, 10, 0, 30, 2, 6, 11, 38, 18, 38, 32, 30, 29, 38, 16, 9, 36, 26, 2, 19, 24, 16, 38, 41, 16, 30, 0, 30, 24, 2, 10, 38, 16, 30, 18, 10, 5, 16, 0, 30, 21, 24, 2, 10, 38, 16, 30, 18, 24, 26, 17, 10, 18, 5, 2, 10, 5, 2, 10, 5, 17, 6, 11, 38, 25, 38, 16, 9, 36, 26, 16, 18, 38, 2, 1, 30, 6, 5, 2, 10, 38, 41, 16, 30, 2, 10, 5, 2, 10, 16, 38, 5, 2, 10, 30, 36, 26, 16, 10, 38, 10, 5, 16, 31, 29, 18, 9, 26, 16, 24, 16, 38, 10, 5, 16, 18, 9, 26, 38, 41, 16, 30, 2, 10, 0, 11, 24, 17, 10, 5, 2, 10, 38, 10, 30, 28, 21, 38, 16, 30, 5, 17, 10, 5, 2, 10, 38, 17, 10, 5, 17, 10, 38, 0, 18, 2, 10, 30, 38, 25, 38, 25, 38, 6, 28, 12, 18, 38, 18, 38, 12, 30, 17, 10, 38, 18, 16, 30, 2, 10, 38, 33, 27, 5, 16, 27, 22, 39, 3, 30, 2, 10, 5, 10, 38, 16, 9, 19, 26, 16, 38, 40, 1, 11, 15, 10, 0, 33, 31, 16, 24, 26, 5, 16, 22, 2, 10, 5, 11, 33, 31, 5, 10, 38, 0, 35, 6, 5, 16, 31, 18, 22, 3, 30, 2, 7, 10, 5, 29, 11, 24, 26, 5, 11, 5, 2, 10, 38, 35, 9, 12, 26, 24, 16, 22, 31, 36, 16, 29, 21, 17, 11, 5, 16, 22, 2, 30, 6, 24, 26, 16, 21, 38, 16, 31, 2, 10, 38, 32, 30, 18, 10, 5, 16, 24, 26, 16, 38, 6, 0, 6, 0, 6, 5, 10, 38, 33, 30, 12, 10, 5, 16, 22, 25, 38, 12, 30, 6, 38, 16, 30, 2, 6, 10, 28, 11, 38, 28, 2, 10, 38, 28, 11, 38, 11, 22, 35, 38, 16, 31, 16, 30, 38, 41, 16, 30, 18, 5, 17, 10, 38, 16, 31, 6, 10, 5, 11, 31, 29, 5, 38, 0, 17, 6, 10, 30, 29, 5, 2, 10, 38, 41, 16, 30, 6, 5, 2, 10, 38, 18, 16, 30, 17, 10, 38, 17, 10, 5, 12, 38, 12, 27, 38, 2, 6, 10, 38, 41, 5, 16, 30, 17, 10, 0, 30, 2, 11, 38, 10, 30, 10, 24, 6, 10, 38, 18, 5, 2, 10, 5, 2, 10, 5, 2, 10, 9, 26, 29, 29, 38, 0, 2, 6, 10, 9, 26, 29, 18, 7, 5, 2, 10, 15, 10, 38, 18, 16, 27, 38, 0, 16, 30, 2, 6, 10, 18, 41, 28, 5, 10, 42, 18, 38, 12, 15, 10, 38, 16, 9, 26, 16, 21, 38, 16, 9, 26, 2, 6, 6, 10, 38, 41, 16, 30, 18, 5, 2, 10, 38, 2, 6, 10, 5, 16, 27, 18, 6, 18, 24, 26, 17, 10, 21, 38, 41, 16, 30, 18, 2, 10, 38, 10, 5, 2, 6, 10, 5, 2, 6, 6, 10, 5, 17, 10, 30, 29, 28, 6, 5, 2, 10, 5, 2, 10, 10, 38, 18, -1, 16, 31, 29, 22, 41, 5, 2, 5, 16, 30, 6, 38, 18, 16, 30, 2, 10, 38, 12, 6, 10, 38, 16, 31, 29, 38, 41, 16, 30, 21, 17, 10, 24, 16, 0, 30, 16, 21, 5, 2, 10, 0, 5, 2, 10, 5, 2, 10, 38, 5, 32, 10, 2, 10, 30, 16, 20, 6, 38, 16, 30, 16, 5, 2, 10, 5, 16, 27, 29, 5, 2, 10, 38, 16, 18, 30, 16, 24, 12, 38, 2, 30, 17, 10, 38, 30, 16, 18, 29, 22, 41, 12, 30, 16, 0, 30, 16, 5, 10, 38, 5, 2, 12, 30, 16, 18, 38, 10, 38, 16, 31, 2, 10, 5, 2, 10, 38, 12, 12, 38, 33, 22, 10, 38, 2, 10, 38, 12, 12, 38, 16, 38, 16, 38, 18, -1, 26, 38, 10, 38, 16, 31, 16, 38, 12, 12, 38, 0, -1, 16, 30, 36, 6, -1, 31, 16, 16, 32, 27, 12, 22, 10, 38, 10, 22, 12, 38, 25, 38, 6, 6, 10, 38, 40, 12, 9, 36, 26, 2, 10, 38, 12, 12, 38, 12, 12, 22, 41, 12, 30, 18, 24, 2, 10, 38, 16, 30, 16, 0, 30, 17, 10, 21, 38, 12, 41, 5, 2, 10, 5, 32, 2, 10, 5, 2, 6, 10, 9, 26, 5, 10, 5, 2, 10, 27, 18, 6, 42, 38, 25, 22, 25, 22, 13, 22, 25, 22, 26, 38, 11, 22, 25, 22, 41, 2, 10, 5, 2, 10, 10, 30, 29, 38, 12, 30, 17, 10, 38, 12, 38, 25, 38, 16, 30, 2, 10, 22, 41, 18, 38, 5, 17, 10, 38, 42, 33, 30, 17, 10, 22, 12, 12, 38, 12, 12, 38, 10, 38, 16, 31, 30, 12, 12, 38, 12, 12, 38, 12, 12, 12, 38, 10, 38, 16, 30, 26, 16, 12, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 10, 38, 26, 38, 33, 30, 17, 6, 10, 22, 12, 12, 38, 16, -1, 16, -1, 16, 31, 36, 18, 26, 38, 10, 38, 18, 38, 32, 31, 16, 5, 8, 38, 6, 0, 6, 22, 12, 12, 38, 25, 38, 25, 38, 6, 38, 41, 16, 30, 17, 10, 21, 5, 2, 10, 38, 25, 38, 25, 38, 16, 31, 6, 38, 10, 38, 12, 30, 17, 6, 10, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 12, 38, 39, 6, 38, 26, 16, 10, 38, 16, 31, 24, 26, 2, 18, 6, 38, 12, 12, 38, 9, 36, 1, 16, 18, -1, 9, 16, 26, 24, 26, 17, 10, 38, 5, 16, 9, 38, 25, -1, 12, 41, 28, 21, 10, 42, 38, 18, 38, 33, 30, 2, 22, 12, 12, 38, 25, 38, 32, 30, 17, 10, 38, 12, 41, 5, 2, 10, 5, 16, 31, 42, 38, 16, 30, 5, 2, 10, 38, 41, 12, 5, 2, 10, 38, 12, 41, 28, 24, 2, 10, 42, 38, 18, 38, 33, 31, 16, 29, 5, 18, 22, 12, 12, 38, 25, -1, 25, -1, 2, 31, 17, 10, 11, 38, 12, 41, 28, 2, 6, 10, 42, 38, 33, 31, 18, 29, 5, 2, 1, 22, 12, 12, 41, 18, 42, 38, 25, 38, 11, 0, 10, 38, 12, 41, 5, 2, 10, 42, 38, 16, 30, 11, 0, 10, 5, 10, 38, 2, 10, 38, 6, 10, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 12, 38, 25, 38, 0, 33, 30, 2, 22, 12, 12, 41, 18, 42, 38, 25, 38, 2, 10, 10, 38, 10, 38, 16, 30, 2, 10, 15, 10, 22, 16, 30, 2, 6, 10, 22, 16, 30, 2, 6, 10, 22, 41, 12, 11, 5, 2, 12, 38, 2, 10, 18, 5, 11, 15, 11, 38, 12, 12, 38, 25, 38, 2, 30, 6, 38, 2, 30, 6, 38, 2, 30, 6, 38, 41, 12, 30, 2, 10, 5, 17, 10, 38, 16, 30, 18, 24, 26, 38, 12, 12, 41, 18, 0, 18, 42, 38, 1, 10, 38, 1, 10, -1, 10, 38, 18, 38, 18, 5, 16, 38, 12, 12, 38, 5, 1, 11, 38, 6, 10, 38, 16, 31, 29, 16, 38, 28, 14, 17, 11, -1, 16, 31, -1, 16, 31, 16, 31, 36, 6, -1, 12, 38, 25, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 36, 6, 38, 16, 9, 36, 26, 38, 16, 31, 36, 18, 26, 38, 11, 31, 16, 7, 38, 16, 9, 26, 16, 22, 16, 31, 36, 18, 26, 16, 38, 11, 31, 6, 38, 16, 9, 26, 11, 38, 0, 16, 38, 25, 38, 25, 38, 25, 38, 16, 9, 36, 26, 38, 16, 9, 36, 26, 38, 25, 38, 26, 16, 38, 0, 16, 9, 36, 38, 41, 18, 12, 30, 29, 28, 1, 5, 17, 11, 38, 16, 30, 5, 2, 6, 10, 38, 30, 18, 18, 38, 0, 30, 17, 11, 18, 18, 5, 2, 10, 5, 2, 10, 15, 11, 38, 25, 38, 16, 31, 16, 26, 16, 38, 0, 38, 16, 9, 36, 26, 38, 12, 38, 25, 38, 32, 30, 18, 6, 38, 6, 10, 38, 2, 10, 38, 16, 31, 36, 28, 24, 26, 16, 38, 12, 12, 38, 0, 38, 18, 38, 18, 38, 16, 31, 36, 26, 38, 10, 38, 16, 31, 29, 24, 26, 38, 16, 31, 38, 41, 16, 30, 2, 10, 5, 10, 5, 17, 10, 38, 10, 38], 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger file', 'period': '1850-1920', 'id': '317'} ``` ### Data Fields There are three configs in this dataset- `plain`, `class` and `pos`. `plain` is a simple text dataset whereas `pos` and `class` are both annotated datasets containing pos tagging. A `plain` data point has the following fields: ``` { "text": The text in the sample("string"), "genre": The genre of the text("string"), "subgenre": The subgenre of the text("string"), "year": The year the text was produced("string"), "quarter_cent": The quarter century in which the text was produced("string"), "decade": The decade the text was produced("string"), "title": The title of the text("string"), "author": The author of the text("string"), "notes": Notes about the text, if any("string"), "comments": Commentsabout the text, if any("string"), "period": 70-year period during which the text was produced("string"), "id": Unqiue identifier("string"), } ``` A typical `pos`/`class` data point has the following fields: ``` { "text": The tokens in the sample(list("string")), "pos_tags": Corresponding POS tags for the tokens (list("string")) "genre": The genre of the text("string"), "subgenre": The subgenre of the text("string"), "year": The year the text was produced("string"), "quarter_cent": The quarter century in which the text was produced("string"), "decade": The decade the text was produced("string"), "title": The title of the text("string"), "author": The author of the text("string"), "notes": Notes about the text, if any("string"), "comments": Commentsabout the text, if any("string"), "period": 70-year period during which the text was produced("string"), "id": Unqiue identifier("string"), } ``` ### Data Splits Train: 333 ## Dataset Creation ### Curation Rationale The Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of British English from 17101920, grouped into three 70-year periods (De Smet 2005; Diller et al. 2011). The history, versions and specifics of corpus composition can be followed up by referring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i) plain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence per line). Version CLMET3.1 is the result of making CLMET available in a CQP format for use in CWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While there is no change to the selection of texts, CLMET3.1 includes additions and changes in linguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization and retagging, (b) fixing of some systematic issues that come with historical data, and (c) enhancing annotation by adding lemmas and simplified part-of-speech class tags ### Source Data #### Initial Data Collection and Normalization The initial data is from OCR of texts in English from 1710-1920 #### Who are the source language producers? The text was produced by the authors of the original work and then OCRd ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information This dataset does not contain any personal information as these are historic texts. Some content might be sensitive ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations Dealing with historical data, tagging remains problematic in all areas, and should be treated with caution (especially with noun recognition) and/or combined with more coarse-grained class queries. Also bear in mind that the lemmas for unknown items are in lower case, while proper names that the tagger did recognize are not necessarily all lower case. In addition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were not homogenized to preserve as much of the original orthography as possible. ## Additional Information ### Dataset Curators The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk� ### Licensing Information Creative Commons Attribution Non Commercial Share Alike 4.0 International ### Citation Information [Needs More Information]
biglam/clmet_3_1
[ "task_categories:text-classification", "task_categories:fill-mask", "task_ids:multi-label-classification", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-07-17T22:27:04+00:00
{"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification", "fill-mask"], "task_ids": ["multi-label-classification", "masked-language-modeling"], "pretty_name": "Corpus of Late Modern English Texts v3.1"}
2022-07-18T01:14:38+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-fill-mask #task_ids-multi-label-classification #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us
Dataset Card for clmet\_3\_1 ============================ NOTES: * Some of the annotations in the 'class' and 'pos' configs are not properly formed. These are indicated with warning messages when the dataset is loaded. * In addition to the classes mentioned in the README for the dataset, there is an additional class in the 'class' dataset called 'QUOT'. As far as I can tell, this is used for tagging all quotation marks * When the 'class' and 'pos' configs are loaded, the available class/pos tags are shown at the top Dataset Statistics: ------------------- The following table summarises the corpus make-up: ``` | | 1710-1780| 1780-1850 | 1850-1920 | ``` |Narrative fiction | 5,405,645 | 5,780,352 | 7,561,339 | |Narrative non-fiction | 2,145,946 | 2,261,485 | 1,097,487 | |Drama | 523,318 | 441,040 | 763,352 | |Letters | 1,208,219 | 842,795 | 554,046 | |Treatise | 1,263,090 | 1,927,272 | 2,030,210 | |Other | 1,635,846 | 2,047,513 | 2,851,805 | Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Leaderboard: * Point of Contact: Henrik De Smet ### Dataset Summary The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkk� 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is: * The corpus covers the period 17101920, divided into three 70-year sub-periods. * The texts making up the corpus have all been written by British and Irish authors who are native speakers of English. * The corpus never contains more than three texts by the same author. * The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period. ### Supported Tasks and Leaderboards * 'named-entity-recognition': Since this dataset is tagged, it can be used for performing NER * 'text-classification': Each text comes with the date of the text and can be used to perform stylistic classification of texts ### Languages The text in the dataset is in English. The associated BCP-47 code is 'en' Dataset Structure ----------------- ### Data Instances A 'plain' sample looks as follows: A 'pos' sample looks as follows: ### Data Fields There are three configs in this dataset- 'plain', 'class' and 'pos'. 'plain' is a simple text dataset whereas 'pos' and 'class' are both annotated datasets containing pos tagging. A 'plain' data point has the following fields: A typical 'pos'/'class' data point has the following fields: ### Data Splits Train: 333 Dataset Creation ---------------- ### Curation Rationale The Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of British English from 17101920, grouped into three 70-year periods (De Smet 2005; Diller et al. 2011). The history, versions and specifics of corpus composition can be followed up by referring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i) plain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence per line). Version CLMET3.1 is the result of making CLMET available in a CQP format for use in CWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While there is no change to the selection of texts, CLMET3.1 includes additions and changes in linguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization and retagging, (b) fixing of some systematic issues that come with historical data, and (c) enhancing annotation by adding lemmas and simplified part-of-speech class tags ### Source Data #### Initial Data Collection and Normalization The initial data is from OCR of texts in English from 1710-1920 #### Who are the source language producers? The text was produced by the authors of the original work and then OCRd ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information This dataset does not contain any personal information as these are historic texts. Some content might be sensitive Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Dealing with historical data, tagging remains problematic in all areas, and should be treated with caution (especially with noun recognition) and/or combined with more coarse-grained class queries. Also bear in mind that the lemmas for unknown items are in lower case, while proper names that the tagger did recognize are not necessarily all lower case. In addition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were not homogenized to preserve as much of the original orthography as possible. Additional Information ---------------------- ### Dataset Curators The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk� ### Licensing Information Creative Commons Attribution Non Commercial Share Alike 4.0 International
[ "### Dataset Summary\n\n\nThe Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkk� 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is:\n\n\n* The corpus covers the period 1710\u00131920, divided into three 70-year sub-periods.\n* The texts making up the corpus have all been written by British and Irish authors who are native speakers of English.\n* The corpus never contains more than three texts by the same author.\n* The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period.", "### Supported Tasks and Leaderboards\n\n\n* 'named-entity-recognition': Since this dataset is tagged, it can be used for performing NER\n* 'text-classification': Each text comes with the date of the text and can be used to perform stylistic classification of texts", "### Languages\n\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA 'plain' sample looks as follows:\n\n\nA 'pos' sample looks as follows:", "### Data Fields\n\n\nThere are three configs in this dataset- 'plain', 'class' and 'pos'. 'plain' is a simple text dataset whereas 'pos' and 'class' are both annotated datasets containing pos tagging. A 'plain' data point has the following fields:\n\n\nA typical 'pos'/'class' data point has the following fields:", "### Data Splits\n\n\nTrain: 333\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of\nBritish English from 1710\u00131920, grouped into three 70-year periods (De Smet 2005; Diller et\nal. 2011). The history, versions and specifics of corpus composition can be followed up by\nreferring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i)\nplain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence\nper line).\nVersion CLMET3.1 is the result of making CLMET available in a CQP format for use in\nCWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While\nthere is no change to the selection of texts, CLMET3.1 includes additions and changes in\nlinguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization\nand retagging, (b) fixing of some systematic issues that come with historical data, and (c)\nenhancing annotation by adding lemmas and simplified part-of-speech class tags", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe initial data is from OCR of texts in English from 1710-1920", "#### Who are the source language producers?\n\n\nThe text was produced by the authors of the original work and then OCRd", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThis dataset does not contain any personal information as these are historic texts. Some content might be sensitive\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDealing with historical data, tagging remains problematic in all areas, and should be treated\nwith caution (especially with noun recognition) and/or combined with more coarse-grained\nclass queries. Also bear in mind that the lemmas for unknown items are in lower\ncase, while proper names that the tagger did recognize are not necessarily all lower case. In\naddition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were\nnot homogenized to preserve as much of the original orthography as possible.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�", "### Licensing Information\n\n\nCreative Commons Attribution Non Commercial Share Alike 4.0 International" ]
[ "TAGS\n#task_categories-text-classification #task_categories-fill-mask #task_ids-multi-label-classification #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nThe Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkk� 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is:\n\n\n* The corpus covers the period 1710\u00131920, divided into three 70-year sub-periods.\n* The texts making up the corpus have all been written by British and Irish authors who are native speakers of English.\n* The corpus never contains more than three texts by the same author.\n* The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period.", "### Supported Tasks and Leaderboards\n\n\n* 'named-entity-recognition': Since this dataset is tagged, it can be used for performing NER\n* 'text-classification': Each text comes with the date of the text and can be used to perform stylistic classification of texts", "### Languages\n\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA 'plain' sample looks as follows:\n\n\nA 'pos' sample looks as follows:", "### Data Fields\n\n\nThere are three configs in this dataset- 'plain', 'class' and 'pos'. 'plain' is a simple text dataset whereas 'pos' and 'class' are both annotated datasets containing pos tagging. A 'plain' data point has the following fields:\n\n\nA typical 'pos'/'class' data point has the following fields:", "### Data Splits\n\n\nTrain: 333\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of\nBritish English from 1710\u00131920, grouped into three 70-year periods (De Smet 2005; Diller et\nal. 2011). The history, versions and specifics of corpus composition can be followed up by\nreferring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i)\nplain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence\nper line).\nVersion CLMET3.1 is the result of making CLMET available in a CQP format for use in\nCWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While\nthere is no change to the selection of texts, CLMET3.1 includes additions and changes in\nlinguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization\nand retagging, (b) fixing of some systematic issues that come with historical data, and (c)\nenhancing annotation by adding lemmas and simplified part-of-speech class tags", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe initial data is from OCR of texts in English from 1710-1920", "#### Who are the source language producers?\n\n\nThe text was produced by the authors of the original work and then OCRd", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThis dataset does not contain any personal information as these are historic texts. Some content might be sensitive\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDealing with historical data, tagging remains problematic in all areas, and should be treated\nwith caution (especially with noun recognition) and/or combined with more coarse-grained\nclass queries. Also bear in mind that the lemmas for unknown items are in lower\ncase, while proper names that the tagger did recognize are not necessarily all lower case. In\naddition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were\nnot homogenized to preserve as much of the original orthography as possible.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�", "### Licensing Information\n\n\nCreative Commons Attribution Non Commercial Share Alike 4.0 International" ]
[ 128, 238, 69, 31, 27, 92, 15, 261, 4, 27, 27, 5, 5, 9, 39, 7, 8, 141, 47, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-fill-mask #task_ids-multi-label-classification #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nThe Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-J�rgen Diller and Jukka Tyrkk�, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkk� 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is:\n\n\n* The corpus covers the period 1710\u00131920, divided into three 70-year sub-periods.\n* The texts making up the corpus have all been written by British and Irish authors who are native speakers of English.\n* The corpus never contains more than three texts by the same author.\n* The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period.### Supported Tasks and Leaderboards\n\n\n* 'named-entity-recognition': Since this dataset is tagged, it can be used for performing NER\n* 'text-classification': Each text comes with the date of the text and can be used to perform stylistic classification of texts### Languages\n\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA 'plain' sample looks as follows:\n\n\nA 'pos' sample looks as follows:", "passage: ### Data Fields\n\n\nThere are three configs in this dataset- 'plain', 'class' and 'pos'. 'plain' is a simple text dataset whereas 'pos' and 'class' are both annotated datasets containing pos tagging. A 'plain' data point has the following fields:\n\n\nA typical 'pos'/'class' data point has the following fields:### Data Splits\n\n\nTrain: 333\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of\nBritish English from 1710\u00131920, grouped into three 70-year periods (De Smet 2005; Diller et\nal. 2011). The history, versions and specifics of corpus composition can be followed up by\nreferring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i)\nplain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence\nper line).\nVersion CLMET3.1 is the result of making CLMET available in a CQP format for use in\nCWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While\nthere is no change to the selection of texts, CLMET3.1 includes additions and changes in\nlinguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization\nand retagging, (b) fixing of some systematic issues that come with historical data, and (c)\nenhancing annotation by adding lemmas and simplified part-of-speech class tags### Source Data#### Initial Data Collection and Normalization\n\n\nThe initial data is from OCR of texts in English from 1710-1920#### Who are the source language producers?\n\n\nThe text was produced by the authors of the original work and then OCRd### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nThis dataset does not contain any personal information as these are historic texts. Some content might be sensitive\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases" ]
9baf6183ae9aeecfd261cb36f0d001e90bc77c57
# PubLayNet PubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is [PubMed Central Open Access Subset (commercial use collection)](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper ["PubLayNet: largest dataset ever for document layout analysis."](https://arxiv.org/abs/1908.07836). The public dataset is in tar.gz format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found [here](https://developer.ibm.com/exchanges/data/all/publaynet/). Licence: [Community Data License Agreement – Permissive – Version 1.0 License](https://cdla.dev/permissive-1-0/) Author: IBM GitHub: https://github.com/ibm-aur-nlp/PubLayNet @article{ zhong2019publaynet, title = { PubLayNet: largest dataset ever for document layout analysis }, author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno }, journal = { arXiv preprint arXiv:1908.07836}, year. = { 2019 } }
jordanparker6/publaynet
[ "task_categories:image-to-text", "size_categories:100B<n<1T", "language:en", "license:other", "arxiv:1908.07836", "region:us" ]
2022-07-17T22:32:26+00:00
{"annotations_creators": [], "language": ["en"], "license": "other", "size_categories": ["100B<n<1T"], "source_datasets": [], "task_categories": ["image-to-text"], "task_ids": [], "title": "PubLayNet"}
2022-07-19T03:20:00+00:00
[ "1908.07836" ]
[ "en" ]
TAGS #task_categories-image-to-text #size_categories-100B<n<1T #language-English #license-other #arxiv-1908.07836 #region-us
# PubLayNet PubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is PubMed Central Open Access Subset (commercial use collection). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper "PubLayNet: largest dataset ever for document layout analysis.". The public dataset is in URL format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found here. Licence: Community Data License Agreement – Permissive – Version 1.0 License Author: IBM GitHub: URL @article{ zhong2019publaynet, title = { PubLayNet: largest dataset ever for document layout analysis }, author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno }, journal = { arXiv preprint arXiv:1908.07836}, year. = { 2019 } }
[ "# PubLayNet\n\nPubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is PubMed Central Open Access Subset (commercial use collection). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper \"PubLayNet: largest dataset ever for document layout analysis.\".\n\nThe public dataset is in URL format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found here. \n\nLicence: Community Data License Agreement – Permissive – Version 1.0 License\n\nAuthor: IBM\n\nGitHub: URL\n\n@article{ zhong2019publaynet,\n title = { PubLayNet: largest dataset ever for document layout analysis },\n author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno },\n journal = { arXiv preprint arXiv:1908.07836},\n year. = { 2019 }\n}" ]
[ "TAGS\n#task_categories-image-to-text #size_categories-100B<n<1T #language-English #license-other #arxiv-1908.07836 #region-us \n", "# PubLayNet\n\nPubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is PubMed Central Open Access Subset (commercial use collection). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper \"PubLayNet: largest dataset ever for document layout analysis.\".\n\nThe public dataset is in URL format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found here. \n\nLicence: Community Data License Agreement – Permissive – Version 1.0 License\n\nAuthor: IBM\n\nGitHub: URL\n\n@article{ zhong2019publaynet,\n title = { PubLayNet: largest dataset ever for document layout analysis },\n author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno },\n journal = { arXiv preprint arXiv:1908.07836},\n year. = { 2019 }\n}" ]
[ 47, 262 ]
[ "passage: TAGS\n#task_categories-image-to-text #size_categories-100B<n<1T #language-English #license-other #arxiv-1908.07836 #region-us \n# PubLayNet\n\nPubLayNet is a large dataset of document images, of which the layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is PubMed Central Open Access Subset (commercial use collection). The annotations are automatically generated by matching the PDF format and the XML format of the articles in the PubMed Central Open Access Subset. More details are available in our paper \"PubLayNet: largest dataset ever for document layout analysis.\".\n\nThe public dataset is in URL format which doesn't fit nicely with huggingface streaming. Modifications have been made to optimise the delivery of the dataset for the hugginface datset api. The original files can be found here. \n\nLicence: Community Data License Agreement – Permissive – Version 1.0 License\n\nAuthor: IBM\n\nGitHub: URL\n\n@article{ zhong2019publaynet,\n title = { PubLayNet: largest dataset ever for document layout analysis },\n author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno },\n journal = { arXiv preprint arXiv:1908.07836},\n year. = { 2019 }\n}" ]
a8a496123b40fa739da2acf9b0dae339d30c7bae
# Dataset Card for Imagenette ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/fastai/imagenette - **Repository:** https://github.com/fastai/imagenette - **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagenette ### Dataset Summary A smaller subset of 10 easily classified classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary), and a little more French. This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset. ### Supported Tasks and Leaderboards - `image-classification`: The dataset can be used to train a model for Image Classification. ### Languages The class labels in the dataset are in English. ## Dataset Structure ### Data Instances A data point comprises an image URL and its classification label. ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>, 'label': 'tench', } ``` ### Data Fields - `image`: A `PIL.Image.Image` object containing the image. - `label`: the expected class label of the image. ### Data Splits | |train|validation| |----------|----:|---------:| |imagenette| 9469| 3925| ## Dataset Creation ### Curation Rationale cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale ### Source Data #### Initial Data Collection and Normalization Imagenette is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization). ### Annotations #### Annotation process cf. https://huggingface.co/datasets/imagenet-1k#annotation-process #### Who are the annotators? cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators ### Personal and Sensitive Information cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information ## Considerations for Using the Data ### Social Impact of Dataset cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset ### Discussion of Biases cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases ### Other Known Limitations cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations ## Additional Information ### Dataset Curators cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators and Jeremy Howard ### Licensing Information [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @software{Howard_Imagenette_2019, title={Imagenette: A smaller subset of 10 easily classified classes from Imagenet}, author={Jeremy Howard}, year={2019}, month={March}, publisher = {GitHub}, url = {https://github.com/fastai/imagenette} } ``` ### Contributions This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
frgfm/imagenette
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "size_categories:1K<n<10K", "source_datasets:extended", "language:en", "license:apache-2.0", "region:us" ]
2022-07-17T23:13:35+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["extended"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "imagenette", "pretty_name": "Imagenette"}
2022-12-11T22:26:06+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-extended #language-English #license-apache-2.0 #region-us
Dataset Card for Imagenette =========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Leaderboard: URL ### Dataset Summary A smaller subset of 10 easily classified classes from Imagenet, and a little more French. This dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset. ### Supported Tasks and Leaderboards * 'image-classification': The dataset can be used to train a model for Image Classification. ### Languages The class labels in the dataset are in English. Dataset Structure ----------------- ### Data Instances A data point comprises an image URL and its classification label. ### Data Fields * 'image': A 'PIL.Image.Image' object containing the image. * 'label': the expected class label of the image. ### Data Splits Dataset Creation ---------------- ### Curation Rationale cf. URL ### Source Data #### Initial Data Collection and Normalization Imagenette is a subset of ImageNet. Information about data collection of the source data can be found here. ### Annotations #### Annotation process cf. URL #### Who are the annotators? cf. URL ### Personal and Sensitive Information cf. URL Considerations for Using the Data --------------------------------- ### Social Impact of Dataset cf. URL ### Discussion of Biases cf. URL ### Other Known Limitations cf. URL Additional Information ---------------------- ### Dataset Curators cf. URL and Jeremy Howard ### Licensing Information Apache License 2.0. ### Contributions This dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm.
[ "### Dataset Summary\n\n\nA smaller subset of 10 easily classified classes from Imagenet, and a little more French.\nThis dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The dataset can be used to train a model for Image Classification.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image URL and its classification label.", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': the expected class label of the image.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\ncf. URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nImagenette is a subset of ImageNet. Information about data collection of the source data can be found here.", "### Annotations", "#### Annotation process\n\n\ncf. URL", "#### Who are the annotators?\n\n\ncf. URL", "### Personal and Sensitive Information\n\n\ncf. URL\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\ncf. URL", "### Discussion of Biases\n\n\ncf. URL", "### Other Known Limitations\n\n\ncf. URL\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\ncf. URL\nand Jeremy Howard", "### Licensing Information\n\n\nApache License 2.0.", "### Contributions\n\n\nThis dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm." ]
[ "TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-extended #language-English #license-apache-2.0 #region-us \n", "### Dataset Summary\n\n\nA smaller subset of 10 easily classified classes from Imagenet, and a little more French.\nThis dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The dataset can be used to train a model for Image Classification.", "### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image URL and its classification label.", "### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': the expected class label of the image.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\ncf. URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nImagenette is a subset of ImageNet. Information about data collection of the source data can be found here.", "### Annotations", "#### Annotation process\n\n\ncf. URL", "#### Who are the annotators?\n\n\ncf. URL", "### Personal and Sensitive Information\n\n\ncf. URL\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\ncf. URL", "### Discussion of Biases\n\n\ncf. URL", "### Other Known Limitations\n\n\ncf. URL\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\ncf. URL\nand Jeremy Howard", "### Licensing Information\n\n\nApache License 2.0.", "### Contributions\n\n\nThis dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm." ]
[ 75, 75, 33, 23, 20, 37, 11, 11, 4, 33, 5, 9, 13, 22, 11, 12, 18, 13, 11, 41 ]
[ "passage: TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-extended #language-English #license-apache-2.0 #region-us \n### Dataset Summary\n\n\nA smaller subset of 10 easily classified classes from Imagenet, and a little more French.\nThis dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The dataset can be used to train a model for Image Classification.### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA data point comprises an image URL and its classification label.### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': the expected class label of the image.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\ncf. URL### Source Data#### Initial Data Collection and Normalization\n\n\nImagenette is a subset of ImageNet. Information about data collection of the source data can be found here.### Annotations#### Annotation process\n\n\ncf. URL#### Who are the annotators?\n\n\ncf. URL### Personal and Sensitive Information\n\n\ncf. URL\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\ncf. URL### Discussion of Biases\n\n\ncf. URL### Other Known Limitations\n\n\ncf. URL\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\ncf. URL\nand Jeremy Howard### Licensing Information\n\n\nApache License 2.0.### Contributions\n\n\nThis dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm." ]
06bc381446b3c3cb1faaa56c5575c71f101e286a
# Dataset Card for "tner/btc" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://aclanthology.org/C16-1111/](https://aclanthology.org/C16-1111/) - **Dataset:** Broad Twitter Corpus - **Domain:** Twitter - **Number of Entity:** 3 ### Dataset Summary Broad Twitter Corpus NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project. - Entity Types: `LOC`, `ORG`, `PER` ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['I', 'hate', 'the', 'words', 'chunder', ',', 'vomit', 'and', 'puke', '.', 'BUUH', '.'], 'tags': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/btc/raw/main/dataset/label.json). ```python { "B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |btc | 6338| 1001|2000| ### Citation Information ``` @inproceedings{derczynski-etal-2016-broad, title = "Broad {T}witter Corpus: A Diverse Named Entity Recognition Resource", author = "Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian", booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers", month = dec, year = "2016", address = "Osaka, Japan", publisher = "The COLING 2016 Organizing Committee", url = "https://aclanthology.org/C16-1111", pages = "1169--1179", abstract = "One of the main obstacles, hampering method development and comparative evaluation of named entity recognition in social media, is the lack of a sizeable, diverse, high quality annotated corpus, analogous to the CoNLL{'}2003 news dataset. For instance, the biggest Ritter tweet corpus is only 45,000 tokens {--} a mere 15{\%} the size of CoNLL{'}2003. Another major shortcoming is the lack of temporal, geographic, and author diversity. This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. The corpus is released openly, including source text and intermediate annotations.", } ```
tner/btc
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "region:us" ]
2022-07-18T09:38:50+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "BTC"}
2022-11-27T19:07:36+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #region-us
Dataset Card for "tner/btc" =========================== Dataset Description ------------------- * Repository: T-NER * Paper: URL * Dataset: Broad Twitter Corpus * Domain: Twitter * Number of Entity: 3 ### Dataset Summary Broad Twitter Corpus NER dataset formatted in a part of TNER project. * Entity Types: 'LOC', 'ORG', 'PER' Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Label ID The label2id dictionary can be found at here. ### Data Splits
[ "### Dataset Summary\n\n\nBroad Twitter Corpus NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'ORG', 'PER'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nBroad Twitter Corpus NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'ORG', 'PER'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
[ 60, 48, 18, 17, 5 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #region-us \n### Dataset Summary\n\n\nBroad Twitter Corpus NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'ORG', 'PER'\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Label ID\n\n\nThe label2id dictionary can be found at here.### Data Splits" ]
cb0fecb243a95034376387309fe8c03f8bf74aee
# Dataset Card for "tner/tweebank_ner" ## Dataset Description - **Repository:** [T-NER](https://github.com/asahi417/tner) - **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281) - **Dataset:** TweeBank NER - **Domain:** Twitter - **Number of Entity:** 4 ### Dataset Summary TweeBank NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project. - Entity Types: `LOC`, `MISC`, `PER`, `ORG` ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'], 'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8] } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json). ```python { "B-LOC": 0, "B-MISC": 1, "B-ORG": 2, "B-PER": 3, "I-LOC": 4, "I-MISC": 5, "I-ORG": 6, "I-PER": 7, "O": 8 } ``` ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |tweebank_ner | 1639| 710 |1201| ### Citation Information ``` @article{DBLP:journals/corr/abs-2201-07281, author = {Hang Jiang and Yining Hua and Doug Beeferman and Deb Roy}, title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building {NLP} Models for Social Media Analysis}, journal = {CoRR}, volume = {abs/2201.07281}, year = {2022}, url = {https://arxiv.org/abs/2201.07281}, eprinttype = {arXiv}, eprint = {2201.07281}, timestamp = {Fri, 21 Jan 2022 13:57:15 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
tner/tweebank_ner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2201.07281", "region:us" ]
2022-07-18T09:39:20+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "TweeBank NER"}
2022-11-27T20:59:13+00:00
[ "2201.07281" ]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2201.07281 #region-us
Dataset Card for "tner/tweebank\_ner" ===================================== Dataset Description ------------------- * Repository: T-NER * Paper: URL * Dataset: TweeBank NER * Domain: Twitter * Number of Entity: 4 ### Dataset Summary TweeBank NER dataset formatted in a part of TNER project. * Entity Types: 'LOC', 'MISC', 'PER', 'ORG' Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Label ID The label2id dictionary can be found at here. ### Data Splits
[ "### Dataset Summary\n\n\nTweeBank NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'MISC', 'PER', 'ORG'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2201.07281 #region-us \n", "### Dataset Summary\n\n\nTweeBank NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'MISC', 'PER', 'ORG'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.", "### Data Splits" ]
[ 69, 51, 18, 17, 5 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2201.07281 #region-us \n### Dataset Summary\n\n\nTweeBank NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'LOC', 'MISC', 'PER', 'ORG'\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Label ID\n\n\nThe label2id dictionary can be found at here.### Data Splits" ]
9d9c27f1d4fb18a02e0d8283bac6ebb01c56c458
# Dataset Card for "tner/tweetner7" ## Dataset Description - **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper) - **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797) - **Dataset:** TweetNER7 - **Domain:** Twitter - **Number of Entity:** 7 ### Dataset Summary This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021. The tweet collection used in TweetNER7 is same as what used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too. - Entity Types: `corperation`, `creative_work`, `event`, `group`, `location`, `product`, `person` ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. For verified usernames, we replace its display name (or account name) with symbols `{@}`. For example, a tweet ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek ``` is transformed into the following text. ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}} ``` A simple function to format tweet follows below. ```python import re from urlextract import URLExtract extractor = URLExtract() def format_tweet(tweet): # mask web urls urls = extractor.find_urls(tweet) for url in urls: tweet = tweet.replace(url, "{{URL}}") # format twitter account tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet) return tweet target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek""" target_format = format_tweet(target) print(target_format) 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}' ``` We ask annotators to ignore those special tokens but label the verified users' mentions. ### Data Split | split | number of instances | description | |:------------------|------:|------:| | train_2020 | 4616 | training dataset from September 2019 to August 2020 | | train_2021 | 2495 | training dataset from September 2020 to August 2021 | | train_all | 7111 | combined training dataset of `train_2020` and `train_2021` | | validation_2020 | 576 | validation dataset from September 2019 to August 2020 | | validation_2021 | 310 | validation dataset from September 2020 to August 2021 | | test_2020 | 576 | test dataset from September 2019 to August 2020 | | test_2021 | 2807 | test dataset from September 2020 to August 2021 | | train_random | 4616 | randomly sampled training dataset with the same size as `train_2020` from `train_all` | | validation_random | 576 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` | | extra_2020 | 87880 | extra tweet without annotations from September 2019 to August 2020 | | extra_2021 | 93594 | extra tweet without annotations from September 2020 to August 2021 | For the temporal-shift setting, model should be trained on `train_2020` with `validation_2020` and evaluate on `test_2021`. In general, model would be trained on `train_all`, the most representative training set with `validation_2021` and evaluate on `test_2021`. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'], 'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14], 'id': '1183344337016381440', 'date': '2019-10-13' } ``` ### Label ID The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json). ```python { "B-corporation": 0, "B-creative_work": 1, "B-event": 2, "B-group": 3, "B-location": 4, "B-person": 5, "B-product": 6, "I-corporation": 7, "I-creative_work": 8, "I-event": 9, "I-group": 10, "I-location": 11, "I-person": 12, "I-product": 13, "O": 14 } ``` ## Models See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7). ### Main Models | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:--------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 | | [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 | | [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 | | [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 66.46 | 61.87 | | [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 | | [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 | | [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 | | [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 | | [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 | | [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 | | [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 66.41 | 61.66 | | [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 | | [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 | | [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 | | [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 | | [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 | | [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 62.9 | 58.13 | | [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 | | [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 | | [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 | | [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 | | [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 | | [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 | | [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 64.01 | 59.47 | | [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 | | [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 | | [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 | Model description follows below. * Model with suffix `-all`: Model fine-tuned on `train_all` and validated on `validation_2021`. * Model with suffix `-continuous`: Model fine-tuned on `train_2021` continuously after fine-tuning on `train_2020` and validated on `validation_2021`. * Model with suffix `-2021`: Model fine-tuned only on `train_2021` and validated on `validation_2021`. * Model with suffix `-2020`: Model fine-tuned only on `train_2021` and validated on `validation_2020`. ### Sub Models (used in ablation study) - Model fine-tuned only on `train_random` and validated on `validation_2020`. | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 | | [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 | | [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 | | [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 | | [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021vinai/bertweet-large) | 64.86 | 60.49 | | [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 | | [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 | | [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 | - Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`. | Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) | |:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:--------------------------------------------------------|------------------:|------------------:| | [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 | | [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 | | [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 | | [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 | | [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 | | [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 | Model description follows below. * Model with suffix `-self2020`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). * Model with suffix `-self2021`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). * Model with suffix `-2020-self2020-all`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Combined training dataset of `extra_2020` and `train_2020`. * Model with suffix `-2020-self2021-all`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Combined training dataset of `extra_2021` and `train_2020`. * Model with suffix `-2020-self2020-continuous`: Fine-tuning on the self-annotated data of `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Fine-tuning on `train_2020` and continuing fine-tuning on `extra_2020`. * Model with suffix `-2020-self2021-continuous`: Fine-tuning on the self-annotated data of `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7). Fine-tuning on `train_2020` and continuing fine-tuning on `extra_2020`. ### Reproduce Experimental Result To reproduce the experimental result on our AACL paper, please see the repository [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper). ## Citation Information ``` @inproceedings{ushio-etal-2022-tweet, title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts", author = "Ushio, Asahi and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco. and Camacho-Collados, Jose", booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing", month = nov, year = "2022", address = "Online", publisher = "Association for Computational Linguistics", } ```
tner/tweetner7
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2210.03797", "region:us" ]
2022-07-18T09:39:50+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "TweetNER7"}
2022-11-27T18:50:28+00:00
[ "2210.03797" ]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2210.03797 #region-us
Dataset Card for "tner/tweetner7" ================================= Dataset Description ------------------- * Repository: URL * Paper: URL * Dataset: TweetNER7 * Domain: Twitter * Number of Entity: 7 ### Dataset Summary This is the official repository of TweetNER7 ("Named Entity Recognition in Twitter: A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021. The tweet collection used in TweetNER7 is same as what used in TweetTopic. The dataset is integrated in TweetNLP too. * Entity Types: 'corperation', 'creative\_work', 'event', 'group', 'location', 'product', 'person' ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'. For verified usernames, we replace its display name (or account name) with symbols '{@}'. For example, a tweet is transformed into the following text. A simple function to format tweet follows below. We ask annotators to ignore those special tokens but label the verified users' mentions. ### Data Split For the temporal-shift setting, model should be trained on 'train\_2020' with 'validation\_2020' and evaluate on 'test\_2021'. In general, model would be trained on 'train\_all', the most representative training set with 'validation\_2021' and evaluate on 'test\_2021'. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Label ID The label2id dictionary can be found at here. Models ------ See full evaluation metrics here. ### Main Models Model description follows below. * Model with suffix '-all': Model fine-tuned on 'train\_all' and validated on 'validation\_2021'. * Model with suffix '-continuous': Model fine-tuned on 'train\_2021' continuously after fine-tuning on 'train\_2020' and validated on 'validation\_2021'. * Model with suffix '-2021': Model fine-tuned only on 'train\_2021' and validated on 'validation\_2021'. * Model with suffix '-2020': Model fine-tuned only on 'train\_2021' and validated on 'validation\_2020'. ### Sub Models (used in ablation study) * Model fine-tuned only on 'train\_random' and validated on 'validation\_2020'. * Model fine-tuned on the self-labeled dataset on 'extra\_{2020,2021}' and validated on 'validation\_2020'. Model description follows below. * Model with suffix '-self2020': Fine-tuning on the self-annotated data of 'extra\_2020' split of tweetner7. * Model with suffix '-self2021': Fine-tuning on the self-annotated data of 'extra\_2021' split of tweetner7. * Model with suffix '-2020-self2020-all': Fine-tuning on the self-annotated data of 'extra\_2020' split of tweetner7. Combined training dataset of 'extra\_2020' and 'train\_2020'. * Model with suffix '-2020-self2021-all': Fine-tuning on the self-annotated data of 'extra\_2021' split of tweetner7. Combined training dataset of 'extra\_2021' and 'train\_2020'. * Model with suffix '-2020-self2020-continuous': Fine-tuning on the self-annotated data of 'extra\_2020' split of tweetner7. Fine-tuning on 'train\_2020' and continuing fine-tuning on 'extra\_2020'. * Model with suffix '-2020-self2021-continuous': Fine-tuning on the self-annotated data of 'extra\_2021' split of tweetner7. Fine-tuning on 'train\_2020' and continuing fine-tuning on 'extra\_2020'. ### Reproduce Experimental Result To reproduce the experimental result on our AACL paper, please see the repository URL
[ "### Dataset Summary\n\n\nThis is the official repository of TweetNER7 (\"Named Entity Recognition in Twitter:\nA Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022\"), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021.\nThe tweet collection used in TweetNER7 is same as what used in TweetTopic.\nThe dataset is integrated in TweetNLP too.\n\n\n* Entity Types: 'corperation', 'creative\\_work', 'event', 'group', 'location', 'product', 'person'", "### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.\n\n\nWe ask annotators to ignore those special tokens but label the verified users' mentions.", "### Data Split\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.\n\n\nModels\n------\n\n\nSee full evaluation metrics here.", "### Main Models\n\n\n\nModel description follows below.\n\n\n* Model with suffix '-all': Model fine-tuned on 'train\\_all' and validated on 'validation\\_2021'.\n* Model with suffix '-continuous': Model fine-tuned on 'train\\_2021' continuously after fine-tuning on 'train\\_2020' and validated on 'validation\\_2021'.\n* Model with suffix '-2021': Model fine-tuned only on 'train\\_2021' and validated on 'validation\\_2021'.\n* Model with suffix '-2020': Model fine-tuned only on 'train\\_2021' and validated on 'validation\\_2020'.", "### Sub Models (used in ablation study)\n\n\n* Model fine-tuned only on 'train\\_random' and validated on 'validation\\_2020'.\n\n\n\n* Model fine-tuned on the self-labeled dataset on 'extra\\_{2020,2021}' and validated on 'validation\\_2020'.\n\n\n\nModel description follows below.\n\n\n* Model with suffix '-self2020': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7.\n* Model with suffix '-self2021': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7.\n* Model with suffix '-2020-self2020-all': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7. Combined training dataset of 'extra\\_2020' and 'train\\_2020'.\n* Model with suffix '-2020-self2021-all': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7. Combined training dataset of 'extra\\_2021' and 'train\\_2020'.\n* Model with suffix '-2020-self2020-continuous': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7. Fine-tuning on 'train\\_2020' and continuing fine-tuning on 'extra\\_2020'.\n* Model with suffix '-2020-self2021-continuous': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7. Fine-tuning on 'train\\_2020' and continuing fine-tuning on 'extra\\_2020'.", "### Reproduce Experimental Result\n\n\nTo reproduce the experimental result on our AACL paper, please see the repository\nURL" ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2210.03797 #region-us \n", "### Dataset Summary\n\n\nThis is the official repository of TweetNER7 (\"Named Entity Recognition in Twitter:\nA Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022\"), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021.\nThe tweet collection used in TweetNER7 is same as what used in TweetTopic.\nThe dataset is integrated in TweetNLP too.\n\n\n* Entity Types: 'corperation', 'creative\\_work', 'event', 'group', 'location', 'product', 'person'", "### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.\n\n\nWe ask annotators to ignore those special tokens but label the verified users' mentions.", "### Data Split\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Label ID\n\n\nThe label2id dictionary can be found at here.\n\n\nModels\n------\n\n\nSee full evaluation metrics here.", "### Main Models\n\n\n\nModel description follows below.\n\n\n* Model with suffix '-all': Model fine-tuned on 'train\\_all' and validated on 'validation\\_2021'.\n* Model with suffix '-continuous': Model fine-tuned on 'train\\_2021' continuously after fine-tuning on 'train\\_2020' and validated on 'validation\\_2021'.\n* Model with suffix '-2021': Model fine-tuned only on 'train\\_2021' and validated on 'validation\\_2021'.\n* Model with suffix '-2020': Model fine-tuned only on 'train\\_2021' and validated on 'validation\\_2020'.", "### Sub Models (used in ablation study)\n\n\n* Model fine-tuned only on 'train\\_random' and validated on 'validation\\_2020'.\n\n\n\n* Model fine-tuned on the self-labeled dataset on 'extra\\_{2020,2021}' and validated on 'validation\\_2020'.\n\n\n\nModel description follows below.\n\n\n* Model with suffix '-self2020': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7.\n* Model with suffix '-self2021': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7.\n* Model with suffix '-2020-self2020-all': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7. Combined training dataset of 'extra\\_2020' and 'train\\_2020'.\n* Model with suffix '-2020-self2021-all': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7. Combined training dataset of 'extra\\_2021' and 'train\\_2020'.\n* Model with suffix '-2020-self2020-continuous': Fine-tuning on the self-annotated data of 'extra\\_2020' split of tweetner7. Fine-tuning on 'train\\_2020' and continuing fine-tuning on 'extra\\_2020'.\n* Model with suffix '-2020-self2021-continuous': Fine-tuning on the self-annotated data of 'extra\\_2021' split of tweetner7. Fine-tuning on 'train\\_2020' and continuing fine-tuning on 'extra\\_2020'.", "### Reproduce Experimental Result\n\n\nTo reproduce the experimental result on our AACL paper, please see the repository\nURL" ]
[ 69, 156, 124, 95, 18, 28, 172, 411, 26 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2210.03797 #region-us \n### Dataset Summary\n\n\nThis is the official repository of TweetNER7 (\"Named Entity Recognition in Twitter:\nA Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022\"), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021.\nThe tweet collection used in TweetNER7 is same as what used in TweetTopic.\nThe dataset is integrated in TweetNLP too.\n\n\n* Entity Types: 'corperation', 'creative\\_work', 'event', 'group', 'location', 'product', 'person'### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.\n\n\nWe ask annotators to ignore those special tokens but label the verified users' mentions.### Data Split\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Label ID\n\n\nThe label2id dictionary can be found at here.\n\n\nModels\n------\n\n\nSee full evaluation metrics here." ]
5bc51fd7d10388377950fee5a9612482d279e189
Top 20 hits for queries from training data in "MS-MARCO v2 passage" by Lucene Searcher (using pyserini) hits@20 0.1957 See also : https://github.com/castorini/pyserini/blob/master/docs/prebuilt-indexes.md For java11 installation in linux : https://stackoverflow.com/questions/52504825/how-to-install-jdk-11-under-ubuntu
Doohae/marcopolo-v2-passage
[ "region:us" ]
2022-07-18T13:53:43+00:00
{}
2022-07-18T14:33:08+00:00
[]
[]
TAGS #region-us
Top 20 hits for queries from training data in "MS-MARCO v2 passage" by Lucene Searcher (using pyserini) hits@20 0.1957 See also : URL For java11 installation in linux : URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
a3c510486e8715aeb27ffb9e3846d2a6ca0f3500
# Dataset Card for MSLR2022 ## Table of Contents - [Dataset Card for MSLR2022](#dataset-card-for-mslr2022) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/allenai/mslr-shared-task - **Repository:** https://github.com/allenai/mslr-shared-task - **Paper:** https://aclanthology.org/2021.emnlp-main.594 - **Leaderboard:** https://github.com/allenai/mslr-shared-task#leaderboard - **Point of Contact:** https://github.com/allenai/mslr-shared-task#contact-us ### Dataset Summary The Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies are summarized in literature reviews. Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain. ### Supported Tasks and Leaderboards This dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer [here](https://github.com/allenai/mslr-shared-task#leaderboard). ### Languages English ## Dataset Structure More information on dataset structure [here](https://github.com/allenai/mslr-shared-task#data-structure). ### Data Instances __MS^2__ ```json { "review_id": "30760312", "pmid": [ "22776744", "25271670", "3493740", "1863023", "16291984", "23984728", "23996433", "18466198", "12151469", "27400308", "16053970", "22922316", "11897647", "11597664", "4230647" ], "title": [ "Improved Cell Survival and Paracrine Capacity of Human Embryonic Stem Cell-Derived Mesenchymal Stem Cells Promote Therapeutic Potential for Pulmonary Arterial Hypertension", "Adipose-derived stem cells attenuate pulmonary arterial hypertension and ameliorate pulmonary arterial remodeling in monocrotaline-induced pulmonary hypertensive rats", "Effect of bone marrow mesenchymal stem cells on experimental pulmonary arterial hypertension", "Survival in patients with primary pulmonary hypertension. Results from a national prospective registry.", "Sildenafil citrate therapy for pulmonary arterial hypertension.", "Macitentan and morbidity and mortality in pulmonary arterial hypertension.", "Long-term research of stem cells in monocrotaline-induced pulmonary arterial hypertension", "Safety and efficacy of autologous endothelial progenitor cells transplantation in children with idiopathic pulmonary arterial hypertension: open-label pilot study.", "Inhaled iloprost for severe pulmonary hypertension.", "Sildenafil reduces pulmonary vascular resistance in single ventricular physiology.", "Ambrisentan therapy for pulmonary arterial hypertension.", "Mesenchymal stem cell prevention of vascular remodeling in high flow-induced pulmonary hypertension through a paracrine mechanism.", "Continuous subcutaneous infusion of treprostinil, a prostacyclin analogue, in patients with pulmonary arterial hypertension: a double-blind, randomized, placebo-controlled trial.", "Effects of the dual endothelin-receptor antagonist bosentan in patients with pulmonary hypertension: a randomised placebocontrolled study", "SYRCLE\\u2019s risk of bias tool for animal studies" ], "abstract": [ "Although transplantation of adult bone marrow mesenchymal stem cells ( BM-MSCs ) holds promise in the treatment for pulmonary arterial hypertension ( PAH ) , the poor survival and differentiation potential of adult BM-MSCs have limited their therapeutic efficiency . Here , we compared the therapeutic efficacy of human embryonic stem cell-derived MSCs ( hESC-MSCs ) with adult BM-MSCs for the treatment of PAH in an animal model . One week following monocrotaline (MCT)-induced PAH , mice were r and omly assigned to receive phosphate-buffered saline ( MCT group ) ; 3.0 \\u00d7 106 human BM-derived MSCs ( BM-MSCs group ) or 3.0 \\u00d7 106 hESC-derived MSCs ( hESC-MSCs group ) via tail vein injection . At 3 weeks posttransplantation , the right ventricular systolic pressure ( RVSP ) , degree of RV hypertrophy , and medial wall thickening of pulmonary arteries were lower= , and pulmonary capillary density was higher in the hESC-MSC group as compared with BM-MSC and MCT groups ( all p < 0.05 ) . At 1 week posttransplantation , the number of engrafted MSCs in the lungs was found significantly higher in the hESC-MSC group than in the BM-MSC group ( all p < 0.01 ) . At 3 weeks posttransplantation , implanted BM-MSCs were undetectable whereas hESC-MSCs were not only engrafted in injured pulmonary arteries but had also undergone endothelial differentiation . In addition , protein profiling of hESC-MSC- and BM-MSC-conditioned medium revealed a differential paracrine capacity . Classification of these factors into bioprocesses revealed that secreted factors from hESC-MSCs were preferentially involved in early embryonic development and tissue differentiation , especially blood vessel morphogenesis . We concluded that improved cell survival and paracrine capacity of hESC-MSCs provide better therapeutic efficacy than BM-MSCs in the treatment for PAH", "Abstract We investigated the effect of adipose-derived stem cells ( ADSCs ) transplantation effects on structural remodeling and pulmonary artery pressure in monocrotaline (MCT)-induced pulmonary hypertensive rats . In the first experiment , 32 male Sprague-Dawley ( SD ) rats were r and omly divided into four groups ( n = 8/group ) : 3 ADSCs treated groups and normal control ( Ctrl ) . ADSCs were administered through the left jugular vein at 105 , 106 and 107 cells , respectively , and a cell density of 106cells/ml was shown to be optimal . The GFP-tagged ADSCs were identified in the lungs and differentiated into endothelial-like cells . In the second experiment , 96 male SD rats were r and omly divided into three groups ( n = 32/group ) : Ctrl , MCT-induced pulmonary arterial hypertension ( PAH ) , and PAH treated with ADSCs ( ADSCs ) . Two weeks post-MCT administration , the ADSCs group received 1 \\u00d7 106 ADSCs via the external jugular vein . Compared to PAH rats , mean pulmonary arterial pressure was decreased in rats at 1 , 2 , and 3 weeks after ADSCs-treatment ( 18.63 \\u00b1 2.15 mmHg versus 24.53 \\u00b1 2.90 mmHg ; 23.07 \\u00b1 2.84 mmHg versus 33.18 \\u00b1 2.30 mmHg ; 22.98 \\u00b1 2.34 mmHg versus 36.38 \\u00b1 3.28 mmHg , p < 0.05 ) . Meanwhile , the right heart hypertrophy index ( 36.2 1 \\u00b1 4.27 % versus 41.01 \\u00b1 1.29 % ; 39.47 \\u00b1 4.02 % versus 48.75 \\u00b1 2 .13 % ; 41.02 \\u00b1 0.9 % versus 50.52 \\u00b1 1.49 % , p < 0.05 , respectively ) , ratio of wall/lumen thickness , as well as the wall/lumen area were significantly reduced in PAH rats at these time points following ADSCs-treatment , as compared with untreated PAH rats . In summary , ADSCs may colonize the pulmonary arteries , attenuate pulmonary arterial hypertension and ameliorate pulmonary arterial remodeling", "The aim of the present study was to investigate the effect of bone marrow mesenchymal stem cell ( BMSC ) transp1antation on lung and heart damage in a rat model of monocrotaline (MCT)-induced pulmonary arterial hypertension ( PAH ) . The animals were r and omly divided into 3 groups : control , PAH and BMSC implantation groups . Structural changes in the pulmonary vascular wall , such as the pulmonary artery lumen area ( VA ) and vascular area ( TAA ) were measured by hematoxylin and eosin ( H&E ) staining , and the hemodynamics were detected by echocardiography . Two weeks post-operation , our results demonstrated that sublingual vein injection of BMSCs significantly attenuated the pulmonary vascular structural and hemodynamic changes caused by pulmonary arterial hypertension . The mechanism may be executed via paracrine effects", "OBJECTIVE To characterize mortality in persons diagnosed with primary pulmonary hypertension and to investigate factors associated with survival . DESIGN Registry with prospect i ve follow-up . SETTING Thirty-two clinical centers in the United States participating in the Patient Registry for the Characterization of Primary Pulmonary Hypertension supported by the National Heart , Lung , and Blood Institute . PATIENTS Patients ( 194 ) diagnosed at clinical centers between 1 July 1981 and 31 December 1985 and followed through 8 August 1988 . MEASUREMENTS At diagnosis , measurements of hemodynamic variables , pulmonary function , and gas exchange variables were taken in addition to information on demographic variables , medical history , and life-style . Patients were followed for survival at 6-month intervals . MAIN RESULTS The estimated median survival of these patients was 2.8 years ( 95 % Cl , 1.9 to 3.7 years ) . Estimated single-year survival rates were as follows : at 1 year , 68 % ( Cl , 61 % to 75 % ) ; at 3 years , 48 % ( Cl , 41 % to 55 % ) ; and at 5 years , 34 % ( Cl , 24 % to 44 % ) . Variables associated with poor survival included a New York Heart Association ( NYHA ) functional class of III or IV , presence of Raynaud phenomenon , elevated mean right atrial pressure , elevated mean pulmonary artery pressure , decreased cardiac index , and decreased diffusing capacity for carbon monoxide ( DLCO ) . Drug therapy at entry or discharge was not associated with survival duration . CONCLUSIONS Mortality was most closely associated with right ventricular hemodynamic function and can be characterized by means of an equation using three variables : mean pulmonary artery pressure , mean right atrial pressure , and cardiac index . Such an equation , once vali date d prospect ively , could be used as an adjunct in planning treatment strategies and allocating medical re sources", "BACKGROUND Sildenafil inhibits phosphodiesterase type 5 , an enzyme that metabolizes cyclic guanosine monophosphate , thereby enhancing the cyclic guanosine monophosphate-mediated relaxation and growth inhibition of vascular smooth-muscle cells , including those in the lung . METHODS In this double-blind , placebo-controlled study , we r and omly assigned 278 patients with symptomatic pulmonary arterial hypertension ( either idiopathic or associated with connective-tissue disease or with repaired congenital systemic-to-pulmonary shunts ) to placebo or sildenafil ( 20 , 40 , or 80 mg ) orally three times daily for 12 weeks . The primary end point was the change from baseline to week 12 in the distance walked in six minutes . The change in mean pulmonary-artery pressure and World Health Organization ( WHO ) functional class and the incidence of clinical worsening were also assessed , but the study was not powered to assess mortality . Patients completing the 12-week r and omized study could enter a long-term extension study . RESULTS The distance walked in six minutes increased from baseline in all sildenafil groups ; the mean placebo-corrected treatment effects were 45 m ( + 13.0 percent ) , 46 m ( + 13.3 percent ) , and 50 m ( + 14.7 percent ) for 20 , 40 , and 80 mg of sildenafil , respectively ( P<0.001 for all comparisons ) . All sildenafil doses reduced the mean pulmonary-artery pressure ( P=0.04 , P=0.01 , and P<0.001 , respectively ) , improved the WHO functional class ( P=0.003 , P<0.001 , and P<0.001 , respectively ) , and were associated with side effects such as flushing , dyspepsia , and diarrhea . The incidence of clinical worsening did not differ significantly between the patients treated with sildenafil and those treated with placebo . Among the 222 patients completing one year of treatment with sildenafil monotherapy , the improvement from baseline at one year in the distance walked in six minutes was 51 m. CONCLUSIONS Sildenafil improves exercise capacity , WHO functional class , and hemodynamics in patients with symptomatic pulmonary arterial hypertension", "BACKGROUND Current therapies for pulmonary arterial hypertension have been adopted on the basis of short-term trials with exercise capacity as the primary end point . We assessed the efficacy of macitentan , a new dual endothelin-receptor antagonist , using a primary end point of morbidity and mortality in a long-term trial . METHODS We r and omly assigned patients with symptomatic pulmonary arterial hypertension to receive placebo once daily , macitentan at a once-daily dose of 3 mg , or macitentan at a once-daily dose of 10 mg . Stable use of oral or inhaled therapy for pulmonary arterial hypertension , other than endothelin-receptor antagonists , was allowed at study entry . The primary end point was the time from the initiation of treatment to the first occurrence of a composite end point of death , atrial septostomy , lung transplantation , initiation of treatment with intravenous or subcutaneous prostanoids , or worsening of pulmonary arterial hypertension . RESULTS A total of 250 patients were r and omly assigned to placebo , 250 to the 3-mg macitentan dose , and 242 to the 10-mg macitentan dose . The primary end point occurred in 46.4 % , 38.0 % , and 31.4 % of the patients in these groups , respectively . The hazard ratio for the 3-mg macitentan dose as compared with placebo was 0.70 ( 97.5 % confidence interval [ CI ] , 0.52 to 0.96 ; P=0.01 ) , and the hazard ratio for the 10-mg macitentan dose as compared with placebo was 0.55 ( 97.5 % CI , 0.39 to 0.76 ; P<0.001 ) . Worsening of pulmonary arterial hypertension was the most frequent primary end-point event . The effect of macitentan on this end point was observed regardless of whether the patient was receiving therapy for pulmonary arterial hypertension at baseline . Adverse events more frequently associated with macitentan than with placebo were headache , nasopharyngitis , and anemia . CONCLUSIONS Macitentan significantly reduced morbidity and mortality among patients with pulmonary arterial hypertension in this event-driven study . ( Funded by Actelion Pharmaceuticals ; SERAPHIN Clinical Trials.gov number , NCT00660179 . )", "Our previous studies have shown that bone marrow mesenchymal stem cells ( BMSCs ) can inhibit the progression of pulmonary artery hypertension ( PAH ) in the monocrotaline ( MCT ) model in the short term . The aim of this study was to further investigate the long-term effect of BMSCs on PAH and to explore the mechanism of the protective effect including the pulmonary vascular remodeling and cell differentiation . PAH model was established by subcutaneous injection of 50 mg/kg MCT as previously study . Postoperatively , the animals were r and omly divided into three groups ( n = 10 in each group ) : control , PAH group , and BMSCs implantation group . Six months after injection , immunology and immunohistochemistry analysis indicated the MCT-induced intima-media thickness in muscular arteries was reduced ( P < 0.05 ) ; the area of collagen fibers in lung tissue was lower ( P < 0.05 ) , and the proliferating cell nuclear antigen level in pulmonary artery smooth muscle cells was decreased ( P < 0.05 ) . Immunofluorescence showed that the cells have the ability to differentiate between von Willebr and factor and vascular endothelial growth factor . Six months after intravenous injection , BMSCs could significantly improve pulmonary function by inhibiting the ventricular remodeling and the effect of cell differentiation", "Experimental data suggest that transplantation of EPCs attenuates monocrotaline-induced pulmonary hypertension in rats and dogs . In addition , our previous studies suggested that autologous EPC transplantation was feasible , safe , and might have beneficial effects on exercise capacity and pulmonary hemodynamics in adults with IPAH . Thus , we hypothesized that transplantation of EPCs would improve exercise capacity and pulmonary hemodynamics in children with IPAH . Thirteen children with IPAH received intravenous infusion of autologous EPCs . The right-sided heart catheterization and 6-MWD test were performed at baseline and at the time of 12 wk after cell infusion . At the time of 12 wk , mPAP decreased by 6.4 mmHg from 70.3 + /- 19.0 to 63.9 + /- 19.3 mmHg ( p = 0.015 ) . PVR decreased by approximately 19 % from 1118 + /- 537 to 906 + /- 377 dyn s/cm(5 ) ( p = 0.047 ) . CO increased from 3.39 + /- 0.79 to 3.85 + /- 0.42 L/min ( p = 0.048 ) . The 6-MWD increased by 39 m from 359 + /- 82 to 399 + /- 74 m ( p = 0.012 ) . NYHA functional class also improved . There were no severe adverse events with cell infusion . The small pilot study suggested that intravenous infusion of autologous EPCs was feasible , safe , and associated with significant improvements in exercise capacity , NYHA functional class , and pulmonary hemodynamics in children with IPAH . Confirmation of these results in a r and omized controlled trial are essential", "BACKGROUND Uncontrolled studies suggested that aerosolized iloprost , a stable analogue of prostacyclin , causes selective pulmonary vasodilatation and improves hemodynamics and exercise capacity in patients with pulmonary hypertension . METHODS We compared repeated daily inhalations of 2.5 or 5.0 microg of iloprost ( six or nine times per day ; median inhaled dose , 30 microg per day ) with inhalation of placebo . A total of 203 patients with selected forms of severe pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension ( New York Heart Association [ NYHA ] functional class III or IV ) were included . The primary end point was met if , after week 12 , the NYHA class and distance walked in six minutes were improved by at least one class and at least 10 percent , respectively , in the absence of clinical deterioration according to predefined criteria and death . RESULTS The combined clinical end point was met by 16.8 percent of the patients receiving iloprost , as compared with 4.9 percent of the patients receiving placebo ( P=0.007 ) . There were increases in the distance walked in six minutes of 36.4 m in the iloprost group as a whole ( P=0.004 ) and of 58.8 m in the subgroup of patients with primary pulmonary hypertension . Overall , 4.0 percent of patients in the iloprost group ( including one who died ) and 13.7 percent of those in the placebo group ( including four who died ) did not complete the study ( P=0.024 ) ; the most common reason for withdrawal was clinical deterioration . As compared with base-line values , hemodynamic values were significantly improved at 12 weeks when measured after iloprost inhalation ( P<0.001 ) , were largely unchanged when measured before iloprost inhalation , and were significantly worse in the placebo group . Further significant beneficial effects of iloprost treatment included an improvement in the NYHA class ( P=0.03 ) , dyspnea ( P=0.015 ) , and quality of life ( P=0.026 ) . Syncope occurred with similar frequency in the two groups but was more frequently rated as serious in the iloprost group , although this adverse effect was not associated with clinical deterioration . CONCLUSIONS Inhaled iloprost is an effective therapy for patients with severe pulmonary hypertension", "BACKGROUND High pulmonary vascular resistance ( PVR ) may be a risk factor for early and late mortality in both Glen shunt and Fontan operation patients . Furthermore , PVR may increase long after the Fontan operation . Whether pulmonary vasodilators such as phosphodiesterase 5 inhibitors can decrease PVR in patients with single ventricular physiology remains undetermined . METHODS AND RESULTS This was a prospect i ve , multicenter study . Patients with single ventricular physiology who have a PVR index higher than 2.5 Wood units \\u00b7 \\u33a1 ( WU ) were enrolled . Cardiac catheterization was performed before and after administration of sildenafil in all patients . After the Fontan operation , a six minute walk test ( 6MWT ) was also performed . A total of 42 patients were enrolled . PVR was significantly decreased in each stage of single ventricular physiology after sildenafil administration : from 4.3\\u00b11.5WU to 2.1\\u00b10.6WU ( p<0.01 ) in patients before a Glenn shunt , from 3.2\\u00b10.5WU to 1.6\\u00b10.6WU ( p<0.001 ) in patients after a Glenn shunt , and from 3.9\\u00b11.7WU to 2.3\\u00b10.8WU ( p<0.001 ) in patients after Fontan . In patients after Fontan , the 6MWT increased from 416\\u00b174 m to 485\\u00b172 m ( p<0.01 ) , and NYHA functional class improved significantly ( p<0.05 ) after sildenafil administration . No major side effects were observed in any patients . CONCLUSIONS Sildenafil reduced PVR in patients with single ventricle physiology . Sildenafil increased exercise capacity and improved NYHA functional class in patients after a Fontan operation . This implies that pulmonary vasodilation is a potential therapeutic target in selected patients with elevated PVR with single ventricle physiology . Long-term clinical significance warrants further study", "OBJECTIVES The purpose of this study was to examine the efficacy and safety of four doses of ambrisentan , an oral endothelin type A receptor-selective antagonist , in patients with pulmonary arterial hypertension ( PAH ) . BACKGROUND Pulmonary arterial hypertension is a life-threatening and progressive disease with limited treatment options . Endothelin is a vasoconstrictor and smooth muscle cell mitogen that plays a critical role in the pathogenesis and progression of PAH . METHODS In this double-blind , dose-ranging study , 64 patients with idiopathic PAH or PAH associated with collagen vascular disease , anorexigen use , or human immunodeficiency virus infection were r and omized to receive 1 , 2.5 , 5 , or 10 mg of ambrisentan once daily for 12 weeks followed by 12 weeks of open-label ambrisentan . The primary end point was an improvement from baseline in 6-min walk distance ( 6MWD ) ; secondary end points included Borg dyspnea index , World Health Organization ( WHO ) functional class , a subject global assessment , and cardiopulmonary hemodynamics . RESULTS At 12 weeks , ambrisentan increased 6MWD ( + 36.1 m , p < 0.0001 ) with similar and statistically significant increases for each dose group ( range , + 33.9 to + 38.1 m ) . Improvements were also observed in Borg dyspnea index , WHO functional class , subject global assessment , mean pulmonary arterial pressure ( -5.2 mm Hg , p < 0.0001 ) , and cardiac index ( + 0.33 l/min/m2 , p < 0.0008 ) . Adverse events were mild and unrelated to dose , including the incidence of elevated serum aminotransferase concentrations > 3 times the upper limit of normal ( 3.1 % ) . CONCLUSIONS Ambrisentan appears to improve exercise capacity , symptoms , and hemodynamics in patients with PAH . The incidence and severity of liver enzyme abnormalities appear to be low", "UNLABELLED Pulmonary arterial hypertension ( PAH ) is characterized by functional and structural changes in the pulmonary vasculature , and despite the drug treatment that made significant progress , the prognosis of patients with advanced PH remains extremely poor . In the present study , we investigated the early effect of bone marrow mesenchymal stem cells ( BMSCs ) on experimental high blood flow-induced PAH model rats and discussed the mechanism . BMSCs were isolated , cultured from bone marrow of Sprague-Dawley ( SD ) rat . The animal model of PAH was created by surgical methods to produce a left-to-right shunt . Following the successful establishment of the PAH model , rats were r and omly assigned to three groups ( n=20 in each group ) : sham group ( control ) , PAH group , and BMSC group ( received a sublingual vein injection of 1 - 5 \\u00d7 10(6 ) BMSCs ) . Two weeks after the administration , BMSCs significantly reduced the vascular remodeling , improved the hemodynamic data , and deceased the right ventricle weight ratio to left ventricular plus septal weight ( RV/LV+S ) ( P<0.05 ) . Real-time reverse transcription-polymerase chain reaction ( RT-PCR ) and immunohistochemistry analysis results indicated that the inflammation factors such as interleukin-1\\u03b2 ( IL-1\\u03b2 ) , IL-6 , and tumor necrosis factor-\\u03b1 ( TNF-\\u03b1 ) were reduced ( P<0.05 ) ; the expression of matrix metallo proteinase-9 ( MMP-9 ) was lower ( P<0.05 ) ; vascular endothelial growth factor ( VEGF ) was higher in BMSC group than those in PAH group ( P<0.05 ) . CONCLUSION Sublingual vein injection of BMSCs for 2 weeks , significantly improved the lung and heart injury caused by left-to-right shunt-induced PAH ; decreased pulmonary vascular remodeling and inflammation ; and enhanced angiogenesis", "Pulmonary arterial hypertension is a life-threatening disease for which continuous intravenous prostacyclin has proven to be effective . However , this treatment requires a permanent central venous catheter with the associated risk of serious complications such as sepsis , thromboembolism , or syncope . Treprostinil , a stable prostacyclin analogue , can be administered by a continuous subcutaneous infusion , avoiding these risks . We conducted a 12-week , double-blind , placebo-controlled multicenter trial in 470 patients with pulmonary arterial hypertension , either primary or associated with connective tissue disease or congenital systemic-to-pulmonary shunts . Exercise capacity improved with treprostinil and was unchanged with placebo ; the between treatment group difference in median six-minute walking distance was 16 m ( p = 0.006 ) . Improvement in exercise capacity was greater in the sicker patients and was dose-related , but independent of disease etiology . Concomitantly , treprostinil significantly improved indices of dyspnea , signs and symptoms of pulmonary hypertension , and hemodynamics . The most common side effect attributed to treprostinil was infusion site pain ( 85 % ) leading to premature discontinuation from the study in 8 % of patients . Three patients in the treprostinil treatment group presented with an episode of gastrointestinal hemorrhage . We conclude that chronic subcutaneous infusion of treprostinil is an effective treatment with an acceptable safety profile in patients with pulmonary arterial hypertension", "BACKGROUND Endothelin 1 , a powerful endogenous vasoconstrictor and mitogen , might be a cause of pulmonary hypertension . We describe the efficacy and safety of bosentan , a dual endothelin-receptor antagonist that can be taken orally , in patients with severe pulmonary hypertension . METHODS In this double-blind , placebo-controlled study , 32 patients with pulmonary hypertension ( primary or associated with scleroderma ) were r and omly assigned to bosentan ( 62.5 mg taken twice daily for 4 weeks then 125 mg twice daily ) or placebo for a minimum of 12 weeks . The primary endpoint was change in exercise capacity . Secondary endpoints included changes in cardiopulmonary haemodynamics , Borg dyspnoea index , WHO functional class , and withdrawal due to clinical worsening . Analysis was by intention to treat . FINDINGS In patients given bosentan , the distance walked in 6 min improved by 70 m at 12 weeks compared with baseline , whereas it worsened by 6 m in those on placebo ( difference 76 m [ 95 % CI 12 - 139 ] , p=0.021 ) . The improvement was maintained for at least 20 weeks . The cardiac index was 1.0 L min(-1 ) m(-2 ) ( 95 % CI 0.6 - 1.4 , p<0.0001 ) greater in patients given bosentan than in those given placebo . Pulmonary vascular resistance decreased by 223 dyn s cm(-)(5 ) with bosentan , but increased by 191 dyn s cm(-5 ) with placebo ( difference -415 [ -608 to -221 ] , p=0.0002 ) . Patients given bosentan had a reduced Borg dyspnoea index and an improved WHO functional class . All three withdrawals from clinical worsening were in the placebo group ( p=0.033 ) . The number and nature of adverse events did not differ between the two groups . INTERPRETATION Bosentan increases exercise capacity and improves haemodynamics in patients with pulmonary hypertension , suggesting that endothelin has an important role in pulmonary hypertension", "Background Systematic Review s ( SRs ) of experimental animal studies are not yet common practice , but awareness of the merits of conducting such SRs is steadily increasing . As animal intervention studies differ from r and omized clinical trials ( RCT ) in many aspects , the methodology for SRs of clinical trials needs to be adapted and optimized for animal intervention studies . The Cochrane Collaboration developed a Risk of Bias ( RoB ) tool to establish consistency and avoid discrepancies in assessing the method ological quality of RCTs . A similar initiative is warranted in the field of animal experimentation . Methods We provide an RoB tool for animal intervention studies ( SYRCLE \\u2019s RoB tool ) . This tool is based on the Cochrane RoB tool and has been adjusted for aspects of bias that play a specific role in animal intervention studies . To enhance transparency and applicability , we formulated signalling questions to facilitate judgment . Results The result ing RoB tool for animal studies contains 10 entries . These entries are related to selection bias , performance bias , detection bias , attrition bias , reporting bias and other biases . Half these items are in agreement with the items in the Cochrane RoB tool . Most of the variations between the two tools are due to differences in design between RCTs and animal studies . Shortcomings in , or unfamiliarity with , specific aspects of experimental design of animal studies compared to clinical studies also play a role . Conclusions SYRCLE \\u2019s RoB tool is an adapted version of the Cochrane RoB tool . Widespread adoption and implementation of this tool will facilitate and improve critical appraisal of evidence from animal studies . This may subsequently enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the method ological quality of animal studies" ], "target": "Conclusions SC therapy is effective for PAH in pre clinical studies .\\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .", "background": "Background Despite significant progress in drug treatment , the prognosis of patients with advanced pulmonary arterial hypertension ( PAH ) remains extremely poor .\\nMany pre clinical studies have reported the efficacy of stem cell ( SC ) therapy for PAH ; however , this approach remains controversial .\\nThe aim of this systematic review and meta- analysis is to assess the potential efficacy of SC therapy for PAH .", "reviews_info": "Background Despite significant progress in drug treatment , the prognosis of patients with advanced pulmonary arterial hypertension ( PAH ) remains extremely poor .\\nMany pre clinical studies have reported the efficacy of stem cell ( SC ) therapy for PAH ; however , this approach remains controversial .\\nThe aim of this systematic review and meta- analysis is to assess the potential efficacy of SC therapy for PAH ." } ``` __Cochrane__ ```json { "review_id": "CD007697", "pmid": [ "16394043" ], "title": [ "Aggressive surgical effort and improved survival in advanced-stage ovarian cancer." ], "abstract": [ "Residual disease after initial surgery for ovarian cancer is the strongest prognostic factor for survival. However, the extent of surgical resection required to achieve optimal cytoreduction is controversial. Our goal was to estimate the effect of aggressive surgical resection on ovarian cancer patient survival.\\n A retrospective cohort study of consecutive patients with International Federation of Gynecology and Obstetrics stage IIIC ovarian cancer undergoing primary surgery was conducted between January 1, 1994, and December 31, 1998. The main outcome measures were residual disease after cytoreduction, frequency of radical surgical resection, and 5-year disease-specific survival.\\n The study comprised 194 patients, including 144 with carcinomatosis. The mean patient age and follow-up time were 64.4 and 3.5 years, respectively. After surgery, 131 (67.5%) of the 194 patients had less than 1 cm of residual disease (definition of optimal cytoreduction). Considering all patients, residual disease was the only independent predictor of survival; the need to perform radical procedures to achieve optimal cytoreduction was not associated with a decrease in survival. For the subgroup of patients with carcinomatosis, residual disease and the performance of radical surgical procedures were the only independent predictors. Disease-specific survival was markedly improved for patients with carcinomatosis operated on by surgeons who most frequently used radical procedures compared with those least likely to use radical procedures (44% versus 17%, P < .001).\\n Overall, residual disease was the only independent predictor of survival. Minimizing residual disease through aggressive surgical resection was beneficial, especially in patients with carcinomatosis.\\n II-2." ], "target": "We found only low quality evidence comparing ultra-radical and standard surgery in women with advanced ovarian cancer and carcinomatosis. The evidence suggested that ultra-radical surgery may result in better survival.\\u00a0 It was unclear whether there were any differences in progression-free survival, QoL and morbidity between the two groups. The cost-effectiveness of this intervention has not been investigated. We are, therefore, unable to reach definite conclusions about the relative benefits and adverse effects of the two types of surgery.\\nIn order to determine the role of ultra-radical surgery in the management of advanced stage ovarian cancer, a sufficiently powered randomised controlled trial comparing ultra-radical and standard surgery or well-designed non-randomised studies would be required." } ``` ### Data Fields __MS^2__ - `"review_id"`: The PubMed ID of the review. - `"pmid"`: The PubMed IDs of the included studies. - `"title"`: The titles of the included studies. - `"abstract"`: The abstracts of the included studies. - `"target"`: The conclusions, taken from the abstract of the review, that serve as the summarization target. - `"background"`: A description of the reviews objective. __Cochrane__ - `"review_id"`: The PubMed ID of the review. - `"pmid"`: The PubMed IDs of the included studies. - `"title"`: The titles of the included studies. - `"abstract"`: The abstracts of the included studies. - `"target"`: The conclusions, taken from the abstract of the review, that serve as the summarization target. ### Data Splits Each dataset is split into training, validation and test partitions __MS^2__ | train | validation | test | |------:|-----------:|-----:| | 14188 | 2021 | 1667 | __Cochrane__ | train | validation | test | |------:|-----------:|-----:| | 3752 | 470 | 470 | ## Dataset Creation Please refer to the following papers for details about dataset curation: [MSˆ2: A Dataset for Multi-Document Summarization of Medical Studies](https://aclanthology.org/2021.emnlp-main.594.pdf) [Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8378607/) ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Licensing information can be found [here](https://github.com/allenai/mslr-shared-task/blob/main/LICENSE). ### Citation Information **DeYoung, Jay, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl and Lucy Lu Wang. "MS2: A Dataset for Multi-Document Summarization of Medical Studies." EMNLP (2021).** ```bibtex @inproceedings{DeYoung2021MS2MS, title={MSˆ2: Multi-Document Summarization of Medical Studies}, author={Jay DeYoung and Iz Beltagy and Madeleine van Zuylen and Bailey Kuehl and Lucy Lu Wang}, booktitle={EMNLP}, year={2021} } ``` **Byron C. Wallace, Sayantani Saha, Frank Soboczenski, and Iain James Marshall. (2020). "Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization." AMIA Annual Symposium.** ```bibtex @article{Wallace2020GeneratingN, title={Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization}, author={Byron C. Wallace and Sayantani Saha and Frank Soboczenski and Iain James Marshall}, journal={AMIA Annual Symposium}, year={2020}, volume={abs/2008.11293} } ```
allenai/mslr2022
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "language:en", "license:apache-2.0", "region:us" ]
2022-07-18T15:24:24+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
2022-11-18T21:16:10+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us
Dataset Card for MSLR2022 ========================= Table of Contents ----------------- * Dataset Card for MSLR2022 + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL ### Dataset Summary The Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies are summarized in literature reviews. Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain. ### Supported Tasks and Leaderboards This dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer here. ### Languages English Dataset Structure ----------------- More information on dataset structure here. ### Data Instances **MS^2** **Cochrane** ### Data Fields **MS^2** * '"review\_id"': The PubMed ID of the review. * '"pmid"': The PubMed IDs of the included studies. * '"title"': The titles of the included studies. * '"abstract"': The abstracts of the included studies. * '"target"': The conclusions, taken from the abstract of the review, that serve as the summarization target. * '"background"': A description of the reviews objective. **Cochrane** * '"review\_id"': The PubMed ID of the review. * '"pmid"': The PubMed IDs of the included studies. * '"title"': The titles of the included studies. * '"abstract"': The abstracts of the included studies. * '"target"': The conclusions, taken from the abstract of the review, that serve as the summarization target. ### Data Splits Each dataset is split into training, validation and test partitions **MS^2** **Cochrane** Dataset Creation ---------------- Please refer to the following papers for details about dataset curation: MSˆ2: A Dataset for Multi-Document Summarization of Medical Studies Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Licensing information can be found here. DeYoung, Jay, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl and Lucy Lu Wang. "MS2: A Dataset for Multi-Document Summarization of Medical Studies." EMNLP (2021). Byron C. Wallace, Sayantani Saha, Frank Soboczenski, and Iain James Marshall. (2020). "Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization." AMIA Annual Symposium.
[ "### Dataset Summary\n\n\nThe Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies are summarized in literature reviews. Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nMore information on dataset structure here.", "### Data Instances\n\n\n**MS^2**\n\n\n**Cochrane**", "### Data Fields\n\n\n**MS^2**\n\n\n* '\"review\\_id\"': The PubMed ID of the review.\n* '\"pmid\"': The PubMed IDs of the included studies.\n* '\"title\"': The titles of the included studies.\n* '\"abstract\"': The abstracts of the included studies.\n* '\"target\"': The conclusions, taken from the abstract of the review, that serve as the summarization target.\n* '\"background\"': A description of the reviews objective.\n\n\n**Cochrane**\n\n\n* '\"review\\_id\"': The PubMed ID of the review.\n* '\"pmid\"': The PubMed IDs of the included studies.\n* '\"title\"': The titles of the included studies.\n* '\"abstract\"': The abstracts of the included studies.\n* '\"target\"': The conclusions, taken from the abstract of the review, that serve as the summarization target.", "### Data Splits\n\n\nEach dataset is split into training, validation and test partitions\n\n\n**MS^2**\n\n\n\n**Cochrane**\n\n\n\nDataset Creation\n----------------\n\n\nPlease refer to the following papers for details about dataset curation:\n\n\nMSˆ2: A Dataset for Multi-Document Summarization of Medical Studies\n\n\nGenerating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nLicensing information can be found here.\n\n\nDeYoung, Jay, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl and Lucy Lu Wang. \"MS2: A Dataset for Multi-Document Summarization of Medical Studies.\" EMNLP (2021).\n\n\nByron C. Wallace, Sayantani Saha, Frank Soboczenski, and Iain James Marshall. (2020). \"Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization.\" AMIA Annual Symposium." ]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n", "### Dataset Summary\n\n\nThe Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies are summarized in literature reviews. Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain.", "### Supported Tasks and Leaderboards\n\n\nThis dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nMore information on dataset structure here.", "### Data Instances\n\n\n**MS^2**\n\n\n**Cochrane**", "### Data Fields\n\n\n**MS^2**\n\n\n* '\"review\\_id\"': The PubMed ID of the review.\n* '\"pmid\"': The PubMed IDs of the included studies.\n* '\"title\"': The titles of the included studies.\n* '\"abstract\"': The abstracts of the included studies.\n* '\"target\"': The conclusions, taken from the abstract of the review, that serve as the summarization target.\n* '\"background\"': A description of the reviews objective.\n\n\n**Cochrane**\n\n\n* '\"review\\_id\"': The PubMed ID of the review.\n* '\"pmid\"': The PubMed IDs of the included studies.\n* '\"title\"': The titles of the included studies.\n* '\"abstract\"': The abstracts of the included studies.\n* '\"target\"': The conclusions, taken from the abstract of the review, that serve as the summarization target.", "### Data Splits\n\n\nEach dataset is split into training, validation and test partitions\n\n\n**MS^2**\n\n\n\n**Cochrane**\n\n\n\nDataset Creation\n----------------\n\n\nPlease refer to the following papers for details about dataset curation:\n\n\nMSˆ2: A Dataset for Multi-Document Summarization of Medical Studies\n\n\nGenerating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nLicensing information can be found here.\n\n\nDeYoung, Jay, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl and Lucy Lu Wang. \"MS2: A Dataset for Multi-Document Summarization of Medical Studies.\" EMNLP (2021).\n\n\nByron C. Wallace, Sayantani Saha, Frank Soboczenski, and Iain James Marshall. (2020). \"Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization.\" AMIA Annual Symposium." ]
[ 117, 149, 38, 20, 16, 224, 95, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 131 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n### Dataset Summary\n\n\nThe Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical evidence from different clinical studies are summarized in literature reviews. Reviews provide the highest quality of evidence for clinical care, but are expensive to produce manually. (Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor. The MSLR shared task uses two datasets to assess the current state of multidocument summarization for this task, and to encourage the development of modeling contributions, scaffolding tasks, methods for model interpretability, and improved automated evaluation methods in this domain.### Supported Tasks and Leaderboards\n\n\nThis dataset is used for the MSLR2022 Shared Task. For information on the shared task leaderboard, please refer here.### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nMore information on dataset structure here.### Data Instances\n\n\n**MS^2**\n\n\n**Cochrane**" ]
db53cfec44e55e89ad01a01e1e75e5619d7be909
# Dataset Card for yaakov/wikipedia-de-splits ## Dataset Description The only goal of this dataset is to have random German Wikipedia articles at various dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements. For this purpose, I loaded the 2665357 articles in the `test` set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes `2**n`: `1, 2, 4, 8, ...`. The split names are strings. The split `'all'` contains all 2665357 available articles. ## Dataset creation This dataset has been created with the following script: !apt install git-lfs !pip install -q transformers datasets from huggingface_hub import notebook_login notebook_login() from datasets import load_dataset wikipedia_de = load_dataset("wikipedia", "20220301.de")['train'] shuffled = wikipedia_de.shuffle(seed=42) from datasets import DatasetDict res = DatasetDict() k, n = 0, 1 while n <= shuffled.num_rows: res[str(k)] = shuffled.select(range(n)) k += 1; n *= 2 res['all'] = shuffled res.push_to_hub('yaakov/wikipedia-de-splits')
yaakov/wikipedia-de-splits
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:wikipedia", "language:de", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2022-07-18T15:50:25+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["de"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["wikipedia"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "wikipedia-de-splits", "configs": ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "all"]}
2022-07-18T17:28:34+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-wikipedia #language-German #license-cc-by-sa-3.0 #license-gfdl #region-us
# Dataset Card for yaakov/wikipedia-de-splits ## Dataset Description The only goal of this dataset is to have random German Wikipedia articles at various dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements. For this purpose, I loaded the 2665357 articles in the 'test' set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes '2n': '1, 2, 4, 8, ...'. The split names are strings. The split ''all'' contains all 2665357 available articles. ## Dataset creation This dataset has been created with the following script: !apt install git-lfs !pip install -q transformers datasets from huggingface_hub import notebook_login notebook_login() from datasets import load_dataset wikipedia_de = load_dataset("wikipedia", "URL")['train'] shuffled = wikipedia_de.shuffle(seed=42) from datasets import DatasetDict res = DatasetDict() k, n = 0, 1 while n <= shuffled.num_rows: res[str(k)] = URL(range(n)) k += 1; n *= 2 res['all'] = shuffled res.push_to_hub('yaakov/wikipedia-de-splits')
[ "# Dataset Card for yaakov/wikipedia-de-splits", "## Dataset Description\nThe only goal of this dataset is to have random German Wikipedia articles at\nvarious dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements.\n\nFor this purpose, I loaded the 2665357 articles in the 'test' set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes '2n': '1, 2, 4, 8, ...'. The split names are strings. The split ''all'' contains all 2665357 available articles.", "## Dataset creation\nThis dataset has been created with the following script:\n\n !apt install git-lfs\n !pip install -q transformers datasets\n \n from huggingface_hub import notebook_login\n notebook_login()\n \n from datasets import load_dataset\n wikipedia_de = load_dataset(\"wikipedia\", \"URL\")['train']\n \n shuffled = wikipedia_de.shuffle(seed=42)\n \n from datasets import DatasetDict\n res = DatasetDict()\n \n k, n = 0, 1\n while n <= shuffled.num_rows:\n res[str(k)] = URL(range(n))\n k += 1; n *= 2\n res['all'] = shuffled\n \n res.push_to_hub('yaakov/wikipedia-de-splits')" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-wikipedia #language-German #license-cc-by-sa-3.0 #license-gfdl #region-us \n", "# Dataset Card for yaakov/wikipedia-de-splits", "## Dataset Description\nThe only goal of this dataset is to have random German Wikipedia articles at\nvarious dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements.\n\nFor this purpose, I loaded the 2665357 articles in the 'test' set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes '2n': '1, 2, 4, 8, ...'. The split names are strings. The split ''all'' contains all 2665357 available articles.", "## Dataset creation\nThis dataset has been created with the following script:\n\n !apt install git-lfs\n !pip install -q transformers datasets\n \n from huggingface_hub import notebook_login\n notebook_login()\n \n from datasets import load_dataset\n wikipedia_de = load_dataset(\"wikipedia\", \"URL\")['train']\n \n shuffled = wikipedia_de.shuffle(seed=42)\n \n from datasets import DatasetDict\n res = DatasetDict()\n \n k, n = 0, 1\n while n <= shuffled.num_rows:\n res[str(k)] = URL(range(n))\n k += 1; n *= 2\n res['all'] = shuffled\n \n res.push_to_hub('yaakov/wikipedia-de-splits')" ]
[ 163, 16, 129, 190 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-wikipedia #language-German #license-cc-by-sa-3.0 #license-gfdl #region-us \n# Dataset Card for yaakov/wikipedia-de-splits## Dataset Description\nThe only goal of this dataset is to have random German Wikipedia articles at\nvarious dataset sizes: Small datasets for fast development and large datasets for statistically relevant measurements.\n\nFor this purpose, I loaded the 2665357 articles in the 'test' set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles and created splits of sizes '2n': '1, 2, 4, 8, ...'. The split names are strings. The split ''all'' contains all 2665357 available articles.## Dataset creation\nThis dataset has been created with the following script:\n\n !apt install git-lfs\n !pip install -q transformers datasets\n \n from huggingface_hub import notebook_login\n notebook_login()\n \n from datasets import load_dataset\n wikipedia_de = load_dataset(\"wikipedia\", \"URL\")['train']\n \n shuffled = wikipedia_de.shuffle(seed=42)\n \n from datasets import DatasetDict\n res = DatasetDict()\n \n k, n = 0, 1\n while n <= shuffled.num_rows:\n res[str(k)] = URL(range(n))\n k += 1; n *= 2\n res['all'] = shuffled\n \n res.push_to_hub('yaakov/wikipedia-de-splits')" ]
9bdb7aefc0244fafa68e2ea3543d5068335296e1
# Dataset Card for "relbert/semeval2012_relational_similarity" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/) - **Dataset:** SemEval2012: Relational Similarity ### Dataset Summary Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model. The dataset contains a list of positive and negative word pair from 89 pre-defined relations. The relation types are constructed on top of following 10 parent relation types. ```shell { 1: "Class Inclusion", # Hypernym 2: "Part-Whole", # Meronym, Substance Meronym 3: "Similar", # Synonym, Co-hypornym 4: "Contrast", # Antonym 5: "Attribute", # Attribute, Event 6: "Non Attribute", 7: "Case Relation", 8: "Cause-Purpose", 9: "Space-Time", 10: "Representation" } ``` Each of the parent relation is further grouped into child relation types where the definition can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw). ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { 'relation_type': '8d', 'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ] 'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ] } ``` ### Data Splits | name |train|validation| |---------|----:|---------:| |semeval2012_relational_similarity| 89 | 89| ### Number of Positive/Negative Word-pairs in each Split | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) | |:----------------|-------------------:|-------------------:|------------------------:|------------------------:| | 1 | 50 | 740 | 63 | 826 | | 10 | 60 | 730 | 66 | 823 | | 10a | 10 | 799 | 14 | 894 | | 10b | 10 | 797 | 13 | 893 | | 10c | 10 | 800 | 11 | 898 | | 10d | 10 | 799 | 10 | 898 | | 10e | 10 | 795 | 8 | 896 | | 10f | 10 | 799 | 10 | 898 | | 1a | 10 | 797 | 14 | 892 | | 1b | 10 | 797 | 14 | 892 | | 1c | 10 | 800 | 11 | 898 | | 1d | 10 | 797 | 16 | 890 | | 1e | 10 | 794 | 8 | 895 | | 2 | 100 | 690 | 117 | 772 | | 2a | 10 | 799 | 15 | 893 | | 2b | 10 | 796 | 11 | 894 | | 2c | 10 | 798 | 13 | 894 | | 2d | 10 | 798 | 10 | 897 | | 2e | 10 | 799 | 11 | 897 | | 2f | 10 | 802 | 11 | 900 | | 2g | 10 | 796 | 16 | 889 | | 2h | 10 | 799 | 11 | 897 | | 2i | 10 | 800 | 9 | 900 | | 2j | 10 | 801 | 10 | 900 | | 3 | 80 | 710 | 80 | 809 | | 3a | 10 | 799 | 11 | 897 | | 3b | 10 | 802 | 11 | 900 | | 3c | 10 | 798 | 12 | 895 | | 3d | 10 | 798 | 14 | 893 | | 3e | 10 | 802 | 5 | 906 | | 3f | 10 | 803 | 11 | 901 | | 3g | 10 | 801 | 6 | 904 | | 3h | 10 | 801 | 10 | 900 | | 4 | 80 | 710 | 82 | 807 | | 4a | 10 | 802 | 11 | 900 | | 4b | 10 | 797 | 7 | 899 | | 4c | 10 | 800 | 12 | 897 | | 4d | 10 | 796 | 4 | 901 | | 4e | 10 | 802 | 12 | 899 | | 4f | 10 | 802 | 9 | 902 | | 4g | 10 | 798 | 15 | 892 | | 4h | 10 | 801 | 12 | 898 | | 5 | 90 | 700 | 105 | 784 | | 5a | 10 | 798 | 14 | 893 | | 5b | 10 | 801 | 8 | 902 | | 5c | 10 | 799 | 11 | 897 | | 5d | 10 | 797 | 15 | 891 | | 5e | 10 | 801 | 8 | 902 | | 5f | 10 | 801 | 11 | 899 | | 5g | 10 | 802 | 9 | 902 | | 5h | 10 | 800 | 15 | 894 | | 5i | 10 | 800 | 14 | 895 | | 6 | 80 | 710 | 99 | 790 | | 6a | 10 | 798 | 15 | 892 | | 6b | 10 | 801 | 11 | 899 | | 6c | 10 | 801 | 13 | 897 | | 6d | 10 | 804 | 10 | 903 | | 6e | 10 | 801 | 11 | 899 | | 6f | 10 | 799 | 12 | 896 | | 6g | 10 | 798 | 12 | 895 | | 6h | 10 | 799 | 15 | 893 | | 7 | 80 | 710 | 91 | 798 | | 7a | 10 | 800 | 14 | 895 | | 7b | 10 | 796 | 7 | 898 | | 7c | 10 | 797 | 11 | 895 | | 7d | 10 | 800 | 14 | 895 | | 7e | 10 | 797 | 10 | 896 | | 7f | 10 | 796 | 12 | 893 | | 7g | 10 | 794 | 9 | 894 | | 7h | 10 | 795 | 14 | 890 | | 8 | 80 | 710 | 90 | 799 | | 8a | 10 | 797 | 14 | 892 | | 8b | 10 | 801 | 7 | 903 | | 8c | 10 | 796 | 12 | 893 | | 8d | 10 | 796 | 13 | 892 | | 8e | 10 | 796 | 11 | 894 | | 8f | 10 | 797 | 12 | 894 | | 8g | 10 | 793 | 7 | 895 | | 8h | 10 | 798 | 14 | 893 | | 9 | 90 | 700 | 96 | 793 | | 9a | 10 | 795 | 14 | 890 | | 9b | 10 | 799 | 12 | 896 | | 9c | 10 | 790 | 7 | 892 | | 9d | 10 | 803 | 9 | 903 | | 9e | 10 | 804 | 8 | 905 | | 9f | 10 | 799 | 10 | 898 | | 9g | 10 | 796 | 14 | 891 | | 9h | 10 | 799 | 13 | 895 | | 9i | 10 | 799 | 9 | 899 | | SUM | 1580 | 70207 | 1778 | 78820 | ### Citation Information ``` @inproceedings{jurgens-etal-2012-semeval, title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity", author = "Jurgens, David and Mohammad, Saif and Turney, Peter and Holyoak, Keith", booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)", month = "7-8 " # jun, year = "2012", address = "Montr{\'e}al, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S12-1047", pages = "356--364", } ```
research-backup/semeval2012_relational_similarity
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-18T16:59:33+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "SemEval2012 task 2 Relational Similarity"}
2022-07-20T17:56:37+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us
Dataset Card for "relbert/semeval2012\_relational\_similarity" ============================================================== Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: SemEval2012: Relational Similarity ### Dataset Summary Relational similarity dataset from SemEval2012 task 2, compiled to fine-tune RelBERT model. The dataset contains a list of positive and negative word pair from 89 pre-defined relations. The relation types are constructed on top of following 10 parent relation types. Each of the parent relation is further grouped into child relation types where the definition can be found here. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Splits ### Number of Positive/Negative Word-pairs in each Split
[ "### Dataset Summary\n\n\nRelational similarity dataset from SemEval2012 task 2, compiled to fine-tune RelBERT model.\nThe dataset contains a list of positive and negative word pair from 89 pre-defined relations.\nThe relation types are constructed on top of following 10 parent relation types.\n\n\nEach of the parent relation is further grouped into child relation types where the definition can be found here.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nRelational similarity dataset from SemEval2012 task 2, compiled to fine-tune RelBERT model.\nThe dataset contains a list of positive and negative word pair from 89 pre-defined relations.\nThe relation types are constructed on top of following 10 parent relation types.\n\n\nEach of the parent relation is further grouped into child relation types where the definition can be found here.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ 35, 98, 18, 5, 17 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n### Dataset Summary\n\n\nRelational similarity dataset from SemEval2012 task 2, compiled to fine-tune RelBERT model.\nThe dataset contains a list of positive and negative word pair from 89 pre-defined relations.\nThe relation types are constructed on top of following 10 parent relation types.\n\n\nEach of the parent relation is further grouped into child relation types where the definition can be found here.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Splits### Number of Positive/Negative Word-pairs in each Split" ]
d607d2b6dbe4cf86623fa542bc6d696e10ec3799
# Dataset Card for "relbert/analogy_questions" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/) - **Dataset:** Analogy Questions ### Dataset Summary This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/). - original analogy questions | name | Size (valid/test) | Num of choice | Num of relation group | Original Reference | |-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:| | `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) | | `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) | | `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) | | `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) | - extra analogy questions | name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference | |:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------| | `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) | | `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) | | `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) | | `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) | | `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) | ## Dataset Structure ### Data Instances An example of `test` looks as follows. ``` { "stem": ["raphael", "painter"], "answer": 2, "choice": [["andersen", "plato"], ["reading", "berkshire"], ["marx", "philosopher"], ["tolstoi", "edison"]] } ``` The `stem` is the query word pair, `choice` has word pair candidates, and `answer` indicates the index of correct candidate which starts from `0`. All data is lowercased except Google dataset. ### Citation Information ``` @inproceedings{ushio-etal-2021-bert-is, title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?}, author={Ushio, Asahi and Espinosa-Anke, Luis and Schockaert, Steven and Camacho-Collados, Jose}, booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference}, year={2021}, publisher={Association for Computational Linguistics} } ``` ### LICENSE The LICENSE of all the resources are under [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
relbert/analogy_questions
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
2022-07-18T17:01:16+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "Analogy Question"}
2023-05-16T19:24:12+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us
Dataset Card for "relbert/analogy\_questions" ============================================= Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: Analogy Questions ### Dataset Summary This dataset contains 5 different word analogy questions used in Analogy Language Model. * original analogy questions * extra analogy questions Dataset Structure ----------------- ### Data Instances An example of 'test' looks as follows. The 'stem' is the query word pair, 'choice' has word pair candidates, and 'answer' indicates the index of correct candidate which starts from '0'. All data is lowercased except Google dataset. ### LICENSE The LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
[ "### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n* original analogy questions\n\n\n\n* extra analogy questions\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'test' looks as follows.\n\n\nThe 'stem' is the query word pair, 'choice' has word pair candidates,\nand 'answer' indicates the index of correct candidate which starts from '0'.\nAll data is lowercased except Google dataset.", "### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
[ "TAGS\n#multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n* original analogy questions\n\n\n\n* extra analogy questions\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'test' looks as follows.\n\n\nThe 'stem' is the query word pair, 'choice' has word pair candidates,\nand 'answer' indicates the index of correct candidate which starts from '0'.\nAll data is lowercased except Google dataset.", "### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
[ 33, 42, 70, 45 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us \n### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n* original analogy questions\n\n\n\n* extra analogy questions\n\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'test' looks as follows.\n\n\nThe 'stem' is the query word pair, 'choice' has word pair candidates,\nand 'answer' indicates the index of correct candidate which starts from '0'.\nAll data is lowercased except Google dataset.### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
d81b8291e5998f5726ab7f35a0a557e761532aac
# Dataset Card for Mostly Basic Python Problems (mbpp) ## Table of Contents - [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-(mbpp)) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/google-research/google-research/tree/master/mbpp - **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732) ### Dataset Summary The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et. al., 2021](https://arxiv.org/abs/2108.07732). ### Supported Tasks and Leaderboards This dataset is used to evaluate code generations. ### Languages English - Python code ## Dataset Structure ```python dataset_full = load_dataset("mbpp") DatasetDict({ test: Dataset({ features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'], num_rows: 974 }) }) dataset_sanitized = load_dataset("mbpp", "sanitized") DatasetDict({ test: Dataset({ features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'], num_rows: 427 }) }) ``` ### Data Instances #### mbpp - full ``` { 'task_id': 1, 'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].', 'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]', 'test_list': [ 'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8', 'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12', 'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'], 'test_setup_code': '', 'challenge_test_list': [] } ``` #### mbpp - sanitized ``` { 'source_file': 'Benchmark Questions Verification V2.ipynb', 'task_id': 2, 'prompt': 'Write a function to find the shared elements from the given two lists.', 'code': 'def similar_elements(test_tup1, test_tup2):\n res = tuple(set(test_tup1) & set(test_tup2))\n return (res) ', 'test_imports': [], 'test_list': [ 'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))', 'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))', 'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))' ] } ``` ### Data Fields - `source_file`: unknown - `text`/`prompt`: description of programming task - `code`: solution for programming task - `test_setup_code`/`test_imports`: necessary code imports to execute tests - `test_list`: list of tests to verify solution - `challenge_test_list`: list of more challenging test to further probe solution ### Data Splits There are two version of the dataset (full and sanitized) which only one split each (test). ## Dataset Creation See section 2.1 of original [paper](https://arxiv.org/abs/2108.07732). ### Curation Rationale In order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides. ### Source Data #### Initial Data Collection and Normalization The dataset was manually created from scratch. #### Who are the source language producers? The dataset was created with an internal crowdsourcing effort at Google. ### Annotations #### Annotation process The full dataset was created first and a subset then underwent a second round to improve the task descriptions. #### Who are the annotators? The dataset was created with an internal crowdsourcing effort at Google. ### Personal and Sensitive Information None. ## Considerations for Using the Data Make sure you execute generated Python code in a safe environment when evauating against this dataset as generated code could be harmful. ### Social Impact of Dataset With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models. ### Discussion of Biases ### Other Known Limitations Since the task descriptions might not be expressive enough to solve the task. The `sanitized` split aims at addressing this issue by having a second round of annotators improve the dataset. ## Additional Information ### Dataset Curators Google Research ### Licensing Information CC-BY-4.0 ### Citation Information ``` @article{austin2021program, title={Program Synthesis with Large Language Models}, author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others}, journal={arXiv preprint arXiv:2108.07732}, year={2021} ``` ### Contributions Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
Muennighoff/mbpp
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-4.0", "code-generation", "arxiv:2108.07732", "region:us" ]
2022-07-18T18:05:21+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "Mostly Basic Python Problems", "tags": ["code-generation"]}
2022-10-20T18:43:58+00:00
[ "2108.07732" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #code-generation #arxiv-2108.07732 #region-us
# Dataset Card for Mostly Basic Python Problems (mbpp) ## Table of Contents - Dataset Card for Mostly Basic Python Problems (mbpp)) - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Initial Data Collection and Normalization - Who are the source language producers? - Annotations - Annotation process - Who are the annotators? - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: Program Synthesis with Large Language Models ### Dataset Summary The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. Released here as part of Program Synthesis with Large Language Models, Austin et. al., 2021. ### Supported Tasks and Leaderboards This dataset is used to evaluate code generations. ### Languages English - Python code ## Dataset Structure ### Data Instances #### mbpp - full #### mbpp - sanitized ### Data Fields - 'source_file': unknown - 'text'/'prompt': description of programming task - 'code': solution for programming task - 'test_setup_code'/'test_imports': necessary code imports to execute tests - 'test_list': list of tests to verify solution - 'challenge_test_list': list of more challenging test to further probe solution ### Data Splits There are two version of the dataset (full and sanitized) which only one split each (test). ## Dataset Creation See section 2.1 of original paper. ### Curation Rationale In order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides. ### Source Data #### Initial Data Collection and Normalization The dataset was manually created from scratch. #### Who are the source language producers? The dataset was created with an internal crowdsourcing effort at Google. ### Annotations #### Annotation process The full dataset was created first and a subset then underwent a second round to improve the task descriptions. #### Who are the annotators? The dataset was created with an internal crowdsourcing effort at Google. ### Personal and Sensitive Information None. ## Considerations for Using the Data Make sure you execute generated Python code in a safe environment when evauating against this dataset as generated code could be harmful. ### Social Impact of Dataset With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models. ### Discussion of Biases ### Other Known Limitations Since the task descriptions might not be expressive enough to solve the task. The 'sanitized' split aims at addressing this issue by having a second round of annotators improve the dataset. ## Additional Information ### Dataset Curators Google Research ### Licensing Information CC-BY-4.0 ### Contributions Thanks to @lvwerra for adding this dataset.
[ "# Dataset Card for Mostly Basic Python Problems (mbpp)", "## Table of Contents\n- Dataset Card for Mostly Basic Python Problems (mbpp))\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: URL\n- Paper: Program Synthesis with Large Language Models", "### Dataset Summary\nThe benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. \n\nReleased here as part of Program Synthesis with Large Language Models, Austin et. al., 2021.", "### Supported Tasks and Leaderboards\nThis dataset is used to evaluate code generations.", "### Languages\nEnglish - Python code", "## Dataset Structure", "### Data Instances", "#### mbpp - full", "#### mbpp - sanitized", "### Data Fields\n\n- 'source_file': unknown\n- 'text'/'prompt': description of programming task\n- 'code': solution for programming task\n- 'test_setup_code'/'test_imports': necessary code imports to execute tests\n- 'test_list': list of tests to verify solution\n- 'challenge_test_list': list of more challenging test to further probe solution", "### Data Splits\nThere are two version of the dataset (full and sanitized) which only one split each (test).", "## Dataset Creation\nSee section 2.1 of original paper.", "### Curation Rationale\nIn order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides.", "### Source Data", "#### Initial Data Collection and Normalization\nThe dataset was manually created from scratch.", "#### Who are the source language producers?\nThe dataset was created with an internal crowdsourcing effort at Google.", "### Annotations", "#### Annotation process\nThe full dataset was created first and a subset then underwent a second round to improve the task descriptions.", "#### Who are the annotators?\nThe dataset was created with an internal crowdsourcing effort at Google.", "### Personal and Sensitive Information\nNone.", "## Considerations for Using the Data\nMake sure you execute generated Python code in a safe environment when evauating against this dataset as generated code could be harmful.", "### Social Impact of Dataset\nWith this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.", "### Discussion of Biases", "### Other Known Limitations\nSince the task descriptions might not be expressive enough to solve the task. The 'sanitized' split aims at addressing this issue by having a second round of annotators improve the dataset.", "## Additional Information", "### Dataset Curators\nGoogle Research", "### Licensing Information\nCC-BY-4.0", "### Contributions\nThanks to @lvwerra for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #code-generation #arxiv-2108.07732 #region-us \n", "# Dataset Card for Mostly Basic Python Problems (mbpp)", "## Table of Contents\n- Dataset Card for Mostly Basic Python Problems (mbpp))\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: URL\n- Paper: Program Synthesis with Large Language Models", "### Dataset Summary\nThe benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. \n\nReleased here as part of Program Synthesis with Large Language Models, Austin et. al., 2021.", "### Supported Tasks and Leaderboards\nThis dataset is used to evaluate code generations.", "### Languages\nEnglish - Python code", "## Dataset Structure", "### Data Instances", "#### mbpp - full", "#### mbpp - sanitized", "### Data Fields\n\n- 'source_file': unknown\n- 'text'/'prompt': description of programming task\n- 'code': solution for programming task\n- 'test_setup_code'/'test_imports': necessary code imports to execute tests\n- 'test_list': list of tests to verify solution\n- 'challenge_test_list': list of more challenging test to further probe solution", "### Data Splits\nThere are two version of the dataset (full and sanitized) which only one split each (test).", "## Dataset Creation\nSee section 2.1 of original paper.", "### Curation Rationale\nIn order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides.", "### Source Data", "#### Initial Data Collection and Normalization\nThe dataset was manually created from scratch.", "#### Who are the source language producers?\nThe dataset was created with an internal crowdsourcing effort at Google.", "### Annotations", "#### Annotation process\nThe full dataset was created first and a subset then underwent a second round to improve the task descriptions.", "#### Who are the annotators?\nThe dataset was created with an internal crowdsourcing effort at Google.", "### Personal and Sensitive Information\nNone.", "## Considerations for Using the Data\nMake sure you execute generated Python code in a safe environment when evauating against this dataset as generated code could be harmful.", "### Social Impact of Dataset\nWith this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.", "### Discussion of Biases", "### Other Known Limitations\nSince the task descriptions might not be expressive enough to solve the task. The 'sanitized' split aims at addressing this issue by having a second round of annotators improve the dataset.", "## Additional Information", "### Dataset Curators\nGoogle Research", "### Licensing Information\nCC-BY-4.0", "### Contributions\nThanks to @lvwerra for adding this dataset." ]
[ 120, 15, 171, 22, 114, 22, 8, 6, 6, 6, 7, 101, 27, 12, 36, 4, 20, 24, 5, 29, 23, 11, 39, 34, 8, 50, 5, 8, 11, 17 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #code-generation #arxiv-2108.07732 #region-us \n# Dataset Card for Mostly Basic Python Problems (mbpp)## Table of Contents\n- Dataset Card for Mostly Basic Python Problems (mbpp))\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Repository: URL\n- Paper: Program Synthesis with Large Language Models### Dataset Summary\nThe benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us. \n\nReleased here as part of Program Synthesis with Large Language Models, Austin et. al., 2021.### Supported Tasks and Leaderboards\nThis dataset is used to evaluate code generations.### Languages\nEnglish - Python code## Dataset Structure### Data Instances#### mbpp - full#### mbpp - sanitized" ]
c692cd0d633f0a920eb45833ec64f748b9e7ca72
# Description This dataset is a subset of [https://huggingface.co/datasets/librispeech_asr](LibriSpeech) that has been adversarially modified. It is designed to fool ASR models into predicting a target of our choosing instead of the correct output. ## Splits The dataset contains several splits. Each split consists of the same utterances, modified with different types and amount of noise. 3 noises have been used: * Adversarial noise of radius 0.04 (`adv_0.04` split) * Adversarial noise of radius 0.015 (`adv_0.015` split) * Adversarial noise of radius 0.015 combined with Room Impulse Response (RIR) noise (`adv_0.015_RIR` split) In addition we provide the original inputs (`natural` split) For each split we actually provide two text keys: `true_text` which is the original LibriSpeech label, i.e. the sentence one can actually hear when listening to the audio; and `target_text`, which is the target sentence of our adversarial attack. An ASR model that this dataset fools would get a low WER on `target_text` and a high WER on `true_text`. An ASR model robust to this dataset would get the opposite. ## Usage You should evaluate your model on this dataset as you would evaluate it on LibriSpeech. Here is an example with Wav2Vec2 ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_adv_eval = load_dataset("RaphaelOlivier/librispeech_asr_adversarial", "adv", split="adv_0.15_adv_txt") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") def map_to_pred(batch): input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_adv_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER on correct labels:", wer(result["true_text"], result["transcription"])) print("WER on attack targets:", wer(result["target_text"], result["transcription"])) ``` *Result (WER)*: | "0.015 target_text" | "0.015 true_text" | "0.04 target_text" | "0.04 true_text" |---|---|---|---| | 58.2 | 108 | 49.5 | 108 |
RaphaelOlivier/librispeech_asr_adversarial
[ "region:us" ]
2022-07-18T18:08:15+00:00
{}
2022-08-02T23:02:08+00:00
[]
[]
TAGS #region-us
Description =========== This dataset is a subset of URL that has been adversarially modified. It is designed to fool ASR models into predicting a target of our choosing instead of the correct output. Splits ------ The dataset contains several splits. Each split consists of the same utterances, modified with different types and amount of noise. 3 noises have been used: * Adversarial noise of radius 0.04 ('adv\_0.04' split) * Adversarial noise of radius 0.015 ('adv\_0.015' split) * Adversarial noise of radius 0.015 combined with Room Impulse Response (RIR) noise ('adv\_0.015\_RIR' split) In addition we provide the original inputs ('natural' split) For each split we actually provide two text keys: 'true\_text' which is the original LibriSpeech label, i.e. the sentence one can actually hear when listening to the audio; and 'target\_text', which is the target sentence of our adversarial attack. An ASR model that this dataset fools would get a low WER on 'target\_text' and a high WER on 'true\_text'. An ASR model robust to this dataset would get the opposite. Usage ----- You should evaluate your model on this dataset as you would evaluate it on LibriSpeech. Here is an example with Wav2Vec2 *Result (WER)*:
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
db08ee5f909bebfadfdee104a5653078574e8602
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: rajistics/finetuned-indian-food * Dataset: rajistics/indian_food_images * Config: rajistics--indian_food_images * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@@rajistics](https://huggingface.co/@rajistics) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-rajistics__indian_food_images-7f4d71b4-11165495
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T19:02:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["rajistics/indian_food_images"], "eval_info": {"task": "image_multi_class_classification", "model": "rajistics/finetuned-indian-food", "metrics": [], "dataset_name": "rajistics/indian_food_images", "dataset_config": "rajistics--indian_food_images", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-07-18T19:03:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: rajistics/finetuned-indian-food * Dataset: rajistics/indian_food_images * Config: rajistics--indian_food_images * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @@rajistics for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: rajistics/finetuned-indian-food\n* Dataset: rajistics/indian_food_images\n* Config: rajistics--indian_food_images\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @@rajistics for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: rajistics/finetuned-indian-food\n* Dataset: rajistics/indian_food_images\n* Config: rajistics--indian_food_images\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @@rajistics for evaluating this model." ]
[ 13, 110, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: rajistics/finetuned-indian-food\n* Dataset: rajistics/indian_food_images\n* Config: rajistics--indian_food_images\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @@rajistics for evaluating this model." ]
dac358f5f9e237b2670b04bf261c3c200326257d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-7d55fc88-11175496
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T19:08:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-07-19T05:04:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 105, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
24b7cc19e0ca633cccf49ad39a42e8feca1ac4d1
# Dataset Card for lampeter_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/3193 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Josef Schmied, Claudia Claridge, Rainer Siemund ### Dataset Summary The Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740,  a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous ### Supported Tasks and Leaderboards - `text-classification`: This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification. ### Languages The text in the dataset is British English. The associated BCP-47 code is `en-GB` ## Dataset Structure ### Data Instances A typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are `date`, which is the year of publication and `genre` which classifies the text into one of six broad areas. ``` { 'id': 'SciB1735', 'text': '\nI. WHEN I read your Defence of the British Mathematicians, I could not, Sir, but admire your Courage in asserting with such undoubting Assurance things so easily disproved. This to me seemed unaccountable, till I reflected on what you say (p. 32.) when upon my having appealed to every thinking Reader, whether it be possible to frame any clear Conception of Fluxions, you express yourself in the following manner, "Pray, Sir, who are those thinking Readers you ap\npeal to? Are they Geometricians, or Persons wholly ignorant of Geometry? If the former, I leave it to them: If the latter, I ask how well are they qualified to judge of the Method of Fluxions"? It must be acknowledged you seem by this Dilemma secure in the favour of one Part of your Readers, and the ignorance of the other. I am nevertheless persuaded there are fair and candid Men among the Mathematicians. And for those who are not Mathematicians, I shall endeavour so to unveil this Mystery, [TRUNCATED]', 'date': '1735', 'genre': 'Science', ' head': 'A DEFENCE OF FREE-THINKING IN Mathematics; &c.\n', 'title': 'A defence of free-thinking in mathematics [...]' } ``` ### Data Fields The dataset contains the following fields: - `id`: Unique identifier("string"), - `text`: ext in the document("string"), - `date`: Date of publication("date64"), - `genre`: Broad classification("string"), - `head`: Often same as title. Can be missing("string"), - `title`: Title of document("string") ### Data Splits Train: 120 ## Dataset Creation ### Curation Rationale The period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria. ### Source Data #### Initial Data Collection and Normalization The original data is selected according to the following criteria: - Complete texts only, including dedications, prefaces, postscripts, etc. - Texts are of varying length, ranging from c. 3,000 to c. 20,000 words. - Each author appears only once to avoid idiosyncratic language use. - Major literary figures of the time were excluded since their writing style can be studied in other, existing collections. - Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language. #### Who are the source language producers? Authors of texts between 1640-1740 ### Annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset N/A ### Discussion of Biases The social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset ### Other Known Limitations None ## Additional Information ### Dataset Curators Josef Schmied, Claudia Claridge, Rainer Siemund ### Licensing Information Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) ### Citation Information University of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, http://hdl.handle.net/20.500.12024/3193.
biglam/lampeter_corpus
[ "task_categories:text-classification", "task_ids:multi-label-classification", "task_ids:multi-class-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-07-18T20:33:13+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification", "multi-class-classification"], "pretty_name": "Lampeter Corpus"}
2022-09-15T14:52:46+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us
# Dataset Card for lampeter_corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: Josef Schmied, Claudia Claridge, Rainer Siemund ### Dataset Summary The Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740,  a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous ### Supported Tasks and Leaderboards - 'text-classification': This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification. ### Languages The text in the dataset is British English. The associated BCP-47 code is 'en-GB' ## Dataset Structure ### Data Instances A typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are 'date', which is the year of publication and 'genre' which classifies the text into one of six broad areas. ### Data Fields The dataset contains the following fields: - 'id': Unique identifier("string"), - 'text': ext in the document("string"), - 'date': Date of publication("date64"), - 'genre': Broad classification("string"), - 'head': Often same as title. Can be missing("string"), - 'title': Title of document("string") ### Data Splits Train: 120 ## Dataset Creation ### Curation Rationale The period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria. ### Source Data #### Initial Data Collection and Normalization The original data is selected according to the following criteria: - Complete texts only, including dedications, prefaces, postscripts, etc. - Texts are of varying length, ranging from c. 3,000 to c. 20,000 words. - Each author appears only once to avoid idiosyncratic language use. - Major literary figures of the time were excluded since their writing style can be studied in other, existing collections. - Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language. #### Who are the source language producers? Authors of texts between 1640-1740 ### Annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset N/A ### Discussion of Biases The social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset ### Other Known Limitations None ## Additional Information ### Dataset Curators Josef Schmied, Claudia Claridge, Rainer Siemund ### Licensing Information Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) University of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, URL
[ "# Dataset Card for lampeter_corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Josef Schmied, Claudia Claridge, Rainer Siemund", "### Dataset Summary\n\nThe Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740, \u0013 a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous", "### Supported Tasks and Leaderboards\n\n- 'text-classification': This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification.", "### Languages\n\nThe text in the dataset is British English. The associated BCP-47 code is 'en-GB'", "## Dataset Structure", "### Data Instances\n\nA typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are 'date', which is the year of publication and 'genre' which classifies the text into one of six broad areas.", "### Data Fields\n\nThe dataset contains the following fields: \n\n- 'id': Unique identifier(\"string\"),\n- 'text': ext in the document(\"string\"),\n- 'date': Date of publication(\"date64\"),\n- 'genre': Broad classification(\"string\"),\n- 'head': Often same as title. Can be missing(\"string\"),\n- 'title': Title of document(\"string\")", "### Data Splits\n\nTrain: 120", "## Dataset Creation", "### Curation Rationale\n\nThe period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original data is selected according to the following criteria:\n- Complete texts only, including dedications, prefaces, postscripts, etc.\n- Texts are of varying length, ranging from c. 3,000 to c. 20,000 words.\n- Each author appears only once to avoid idiosyncratic language use.\n- Major literary figures of the time were excluded since their writing style can be studied in other, existing collections.\n- Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language.", "#### Who are the source language producers?\n\nAuthors of texts between 1640-1740", "### Annotations", "#### Annotation process\n\nN/A", "#### Who are the annotators?\n\nN/A", "### Personal and Sensitive Information\n\nN/A", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nN/A", "### Discussion of Biases\n\nThe social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset", "### Other Known Limitations\n\nNone", "## Additional Information", "### Dataset Curators\n\nJosef Schmied, Claudia Claridge, Rainer Siemund", "### Licensing Information\n\nCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)\n\n\n\nUniversity of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, URL" ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for lampeter_corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Josef Schmied, Claudia Claridge, Rainer Siemund", "### Dataset Summary\n\nThe Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740, \u0013 a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous", "### Supported Tasks and Leaderboards\n\n- 'text-classification': This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification.", "### Languages\n\nThe text in the dataset is British English. The associated BCP-47 code is 'en-GB'", "## Dataset Structure", "### Data Instances\n\nA typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are 'date', which is the year of publication and 'genre' which classifies the text into one of six broad areas.", "### Data Fields\n\nThe dataset contains the following fields: \n\n- 'id': Unique identifier(\"string\"),\n- 'text': ext in the document(\"string\"),\n- 'date': Date of publication(\"date64\"),\n- 'genre': Broad classification(\"string\"),\n- 'head': Often same as title. Can be missing(\"string\"),\n- 'title': Title of document(\"string\")", "### Data Splits\n\nTrain: 120", "## Dataset Creation", "### Curation Rationale\n\nThe period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original data is selected according to the following criteria:\n- Complete texts only, including dedications, prefaces, postscripts, etc.\n- Texts are of varying length, ranging from c. 3,000 to c. 20,000 words.\n- Each author appears only once to avoid idiosyncratic language use.\n- Major literary figures of the time were excluded since their writing style can be studied in other, existing collections.\n- Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language.", "#### Who are the source language producers?\n\nAuthors of texts between 1640-1740", "### Annotations", "#### Annotation process\n\nN/A", "#### Who are the annotators?\n\nN/A", "### Personal and Sensitive Information\n\nN/A", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nN/A", "### Discussion of Biases\n\nThe social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset", "### Other Known Limitations\n\nNone", "## Additional Information", "### Dataset Curators\n\nJosef Schmied, Claudia Claridge, Rainer Siemund", "### Licensing Information\n\nCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)\n\n\n\nUniversity of Oxford, The Lampeter Corpus of Early Modern English Tracts, Oxford Text Archive, URL" ]
[ 103, 10, 112, 38, 102, 45, 26, 6, 76, 100, 8, 5, 135, 4, 147, 19, 5, 8, 12, 11, 8, 10, 35, 9, 5, 19, 42 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #region-us \n# Dataset Card for lampeter_corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Josef Schmied, Claudia Claridge, Rainer Siemund### Dataset Summary\n\nThe Lampeter Corpus of Early Modern English Tracts is a collection of texts on various subject matter published between 1640 and 1740, \u0013 a time that was marked by the rise of mass publication, the development of public discourse in many areas of everyday life and last but not least, the standardisation of British English. Each text belongs to one of the following genres: Law, Economy, Religion, Politics, Science, Miscellaneous### Supported Tasks and Leaderboards\n\n- 'text-classification': This dataset comes with dates and genre classifications for each text which can be used to finetune a model for text classification.### Languages\n\nThe text in the dataset is British English. The associated BCP-47 code is 'en-GB'## Dataset Structure", "passage: ### Data Instances\n\nA typical data point contains an id, a text, the head of the text (which can be missing on some occasions) and the title. The two features which can be used for classification are 'date', which is the year of publication and 'genre' which classifies the text into one of six broad areas.### Data Fields\n\nThe dataset contains the following fields: \n\n- 'id': Unique identifier(\"string\"),\n- 'text': ext in the document(\"string\"),\n- 'date': Date of publication(\"date64\"),\n- 'genre': Broad classification(\"string\"),\n- 'head': Often same as title. Can be missing(\"string\"),\n- 'title': Title of document(\"string\")### Data Splits\n\nTrain: 120## Dataset Creation### Curation Rationale\n\nThe period covered by the Lampeter Corpus, 1640 to 1740, marks a crucial period in English history and the elaboration of English as a multi-purpose language. The texts selected for the corpus reflect the standardisation process of English and historical developments between the outbreak of the Civil War and the beginning of the Industrial Revolution. To meet the needs of linguists and historians alike, the Lampeter project has attempted to create a balanced corpus rather than a randomly chosen archive or collection. A balanced corpus, then, is characterised by several transparent sampling criteria.### Source Data#### Initial Data Collection and Normalization\n\nThe original data is selected according to the following criteria:\n- Complete texts only, including dedications, prefaces, postscripts, etc.\n- Texts are of varying length, ranging from c. 3,000 to c. 20,000 words.\n- Each author appears only once to avoid idiosyncratic language use.\n- Major literary figures of the time were excluded since their writing style can be studied in other, existing collections.\n- Generally, only first editions of the texts; later editions only if changes were made by the original authors, thus ensuring the authenticity of the language.#### Who are the source language producers?\n\nAuthors of texts between 1640-1740### Annotations#### Annotation process\n\nN/A#### Who are the annotators?\n\nN/A### Personal and Sensitive Information\n\nN/A## Considerations for Using the Data### Social Impact of Dataset\n\nN/A### Discussion of Biases\n\nThe social biases of the time in terms of race, sex, gender, etc. might be encountered in this dataset" ]
017c5c5cada61bfacf5431573b0d054d7a9ce6c6
# Dataset Card for NLLB Multi-Domain ## Table of Contents - [Dataset Card for NLLB Multi-Domain](#dataset-card-for-nllb-multi-domain) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Home:** [Flores](https://github.com/facebookresearch/flores/tree/main/nllb_md) - **Repository:** [Github](https://github.com/facebookresearch/flores/tree/main/nllb_md) ### Dataset Summary NLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences. ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). Flores 200 is an extention of this. ### Languages Language | FLORES-200 code ---|--- Central Aymara | ayr_Latn Bhojpuri | bho_Deva Dyula | dyu_Latn Friulian | fur_Latn Russian | rus_Cyrl Wolof | wol_Latn Use a hyphenated pairing to get two langauges in one datapoint (e.g., "eng_Latn-rus_Cyrl" will provide sentences in the format below). ## Dataset Structure ### Data Instances See Dataset Viewer. The text is provided as-in the original dataset, without further preprocessing or tokenization. ### Data Fields - `id`: Row number for the data entry, starting at 1. - `sentence`: The full sentence in the specific language (may have _lang for pairings) - `domain`: The domain of the sentence. ### Dataset Creation Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation. ## Additional Information ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information Please cite the authors if you use these corpora in your work: ```bibtex @article{nllb2022, author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang}, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, year = {2022} } ``` Please also cite prior work that this dataset builds on: ```bibtex @inproceedings{, title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation}, author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela}, year={2021} } ``` ```bibtex @inproceedings{, title={Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English}, author={Guzm\'{a}n, Francisco and Chen, Peng-Jen and Ott, Myle and Pino, Juan and Lample, Guillaume and Koehn, Philipp and Chaudhary, Vishrav and Ranzato, Marc'Aurelio}, journal={arXiv preprint arXiv:1902.01382}, year={2019} } ```
breakend/nllb-multi-domain
[ "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|flores", "language:en", "language:ru", "language:ayr", "language:bho", "language:dyu", "language:fur", "language:wol", "license:cc-by-sa-4.0", "arxiv:2207.04672", "region:us" ]
2022-07-18T22:01:53+00:00
{"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["en", "ru", "ayr", "bho", "dyu", "fur", "wol"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|flores"], "task_categories": ["conditional-text-generation"], "task_ids": ["machine-translation"], "paperswithcode_id": "flores", "pretty_name": "nllb-multi-domain"}
2022-08-09T19:44:23+00:00
[ "2207.04672" ]
[ "en", "ru", "ayr", "bho", "dyu", "fur", "wol" ]
TAGS #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-English #language-Russian #language-Central Aymara #language-Bhojpuri #language-Dyula #language-Friulian #language-Wolof #license-cc-by-sa-4.0 #arxiv-2207.04672 #region-us
Dataset Card for NLLB Multi-Domain ================================== Table of Contents ----------------- * Dataset Card for NLLB Multi-Domain + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation + Additional Information - Dataset Curators - Licensing Information - Citation Information Dataset Description ------------------- * Home: Flores * Repository: Github ### Dataset Summary NLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences. ### Supported Tasks and Leaderboards #### Multilingual Machine Translation Refer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this. ### Languages Use a hyphenated pairing to get two langauges in one datapoint (e.g., "eng\_Latn-rus\_Cyrl" will provide sentences in the format below). Dataset Structure ----------------- ### Data Instances See Dataset Viewer. The text is provided as-in the original dataset, without further preprocessing or tokenization. ### Data Fields * 'id': Row number for the data entry, starting at 1. * 'sentence': The full sentence in the specific language (may have \_lang for pairings) * 'domain': The domain of the sentence. ### Dataset Creation Please refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation. Additional Information ---------------------- ### Dataset Curators See paper for details. ### Licensing Information Licensed with Creative Commons Attribution Share Alike 4.0. License available here. Please cite the authors if you use these corpora in your work: Please also cite prior work that this dataset builds on:
[ "### Dataset Summary\n\n\nNLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.", "### Supported Tasks and Leaderboards", "#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.", "### Languages\n\n\n\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-rus\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nSee Dataset Viewer.\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.", "### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'domain': The domain of the sentence.", "### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nSee paper for details.", "### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:\n\n\nPlease also cite prior work that this dataset builds on:" ]
[ "TAGS\n#annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-English #language-Russian #language-Central Aymara #language-Bhojpuri #language-Dyula #language-Friulian #language-Wolof #license-cc-by-sa-4.0 #arxiv-2207.04672 #region-us \n", "### Dataset Summary\n\n\nNLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.", "### Supported Tasks and Leaderboards", "#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.", "### Languages\n\n\n\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-rus\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nSee Dataset Viewer.\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.", "### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'domain': The domain of the sentence.", "### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nSee paper for details.", "### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:\n\n\nPlease also cite prior work that this dataset builds on:" ]
[ 123, 70, 10, 57, 58, 34, 56, 42, 11, 48 ]
[ "passage: TAGS\n#annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-English #language-Russian #language-Central Aymara #language-Bhojpuri #language-Dyula #language-Friulian #language-Wolof #license-cc-by-sa-4.0 #arxiv-2207.04672 #region-us \n### Dataset Summary\n\n\nNLLB Multi Domain is a set of professionally-translated sentences in News, Unscripted informal speech, and Health domains. It is designed to enable assessment of out-of-domain performance and to study domain adaptation for machine translation. Each domain has approximately 3000 sentences.### Supported Tasks and Leaderboards#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.### Languages\n\n\n\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-rus\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nSee Dataset Viewer.\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'domain': The domain of the sentence.### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nSee paper for details." ]
7cf9edbb26f77e278980a0a7274c9b9cfe736a0a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-large-book-summary * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-e4148a42-11205497
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T22:40:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-18T23:46:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/led-large-book-summary * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 89, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
ad38a8b3a538f495d14beab585c71d704249645b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-e4148a42-11205498
[ "autotrain", "evaluation", "region:us" ]
2022-07-18T22:40:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-18T23:11:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 102, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
7c2e7e455a7a832656bcc0fb0e299e2af85f9778
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-All
[ "region:us" ]
2022-07-19T10:14:23+00:00
{}
2022-09-06T13:45:08+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
2f101624310c129a6303a1f4f3df70a191357911
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-None
[ "region:us" ]
2022-07-19T10:16:17+00:00
{}
2022-09-06T13:45:55+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
7b02284135e8ef3867e5fc168f9bbb9cbd355335
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title
[ "region:us" ]
2022-07-19T10:25:09+00:00
{}
2022-09-06T13:48:25+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
aa7465b952ba969304d1a6b8f32b7bbb00873fbb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f9efad07-2209-4d77-9230-9fd08f3882ea-41
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T10:46:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-19T13:25:37+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
cde78f02852c8d40801626a4e54802f560797583
# Dataset Card for dummy_tags
albertvillanova/dummy_tags
[ "language:en", "test", "dummy", "region:us" ]
2022-07-19T11:43:29+00:00
{"language": ["en"], "tags": ["test", "dummy"]}
2022-07-19T11:45:12+00:00
[]
[ "en" ]
TAGS #language-English #test #dummy #region-us
# Dataset Card for dummy_tags
[ "# Dataset Card for dummy_tags" ]
[ "TAGS\n#language-English #test #dummy #region-us \n", "# Dataset Card for dummy_tags" ]
[ 15, 9 ]
[ "passage: TAGS\n#language-English #test #dummy #region-us \n# Dataset Card for dummy_tags" ]
f7f915d4676a984516b6dc1a6a898852d81e4b40
this is a test
liyangbing/water
[ "license:afl-3.0", "region:us" ]
2022-07-19T11:51:21+00:00
{"license": "afl-3.0"}
2022-07-19T12:11:13+00:00
[]
[]
TAGS #license-afl-3.0 #region-us
this is a test
[]
[ "TAGS\n#license-afl-3.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-afl-3.0 #region-us \n" ]
6d8a794fba6e00890cdb0dffba4e1cc5edc52664
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Targets
[ "region:us" ]
2022-07-19T11:59:16+00:00
{}
2022-09-06T13:51:18+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
43610fc18d73da3c4af78813f71b1c3c70c2dc44
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Indicators
[ "region:us" ]
2022-07-19T12:03:15+00:00
{}
2022-09-06T13:43:39+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
6ca901142522d395700127df0deed52dde59816c
nli-label: - (0) entailment - (2) contradiction
gorkaartola/SC-train-valid-test_SDG-Descriptions
[ "region:us" ]
2022-07-19T12:23:45+00:00
{}
2023-01-18T13:58:15+00:00
[]
[]
TAGS #region-us
nli-label: - (0) entailment - (2) contradiction
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
7aa921ee95641df5965f5589fdfd1a7426296547
# Dataset description This dataset consists of sequences of Python code followed by a a docstring explaining its function. It was constructed by concatenating code and text pairs from this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) that were originally code and markdown cells in Jupyter Notebooks. The content of each example the following: ```` [CODE] """ Explanation: [TEXT] End of explanation """ [CODE] """ Explanation: [TEXT] End of explanation """ ... ```` # How to use it ```python from datasets import load_dataset ds = load_dataset("codeparrot/github-jupyter-code-to-text", split="train") ```` ```` Dataset({ features: ['repo_name', 'path', 'license', 'content'], num_rows: 47452 }) ````
codeparrot/github-jupyter-code-to-text
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "code", "region:us" ]
2022-07-19T13:00:45+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "tags": ["code"]}
2023-11-04T23:51:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #code #region-us
# Dataset description This dataset consists of sequences of Python code followed by a a docstring explaining its function. It was constructed by concatenating code and text pairs from this dataset that were originally code and markdown cells in Jupyter Notebooks. The content of each example the following: ' # How to use it ' '
[ "# Dataset description\nThis dataset consists of sequences of Python code followed by a a docstring explaining its function. It was constructed by concatenating code and text pairs \nfrom this dataset that were originally code and markdown cells in Jupyter Notebooks.\n\nThe content of each example the following:\n'", "# How to use it\n'\n'" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #code #region-us \n", "# Dataset description\nThis dataset consists of sequences of Python code followed by a a docstring explaining its function. It was constructed by concatenating code and text pairs \nfrom this dataset that were originally code and markdown cells in Jupyter Notebooks.\n\nThe content of each example the following:\n'", "# How to use it\n'\n'" ]
[ 43, 71, 7 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #code #region-us \n# Dataset description\nThis dataset consists of sequences of Python code followed by a a docstring explaining its function. It was constructed by concatenating code and text pairs \nfrom this dataset that were originally code and markdown cells in Jupyter Notebooks.\n\nThe content of each example the following:\n'# How to use it\n'\n'" ]
802411c3010cb00d1b05bad57ca77365a3c699d6
# Dataset Card for CodeContests ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/deepmind/code_contests/ - **Paper:** [Competition-Level Code Generation with AlphaCode](https://arxiv.org/abs/2203.07814v1) - **Leaderboard:** [Code Generation on CodeContests](https://paperswithcode.com/sota/code-generation-on-codecontests) - **Point of Contact:** [David Choi](mailto:[email protected]) ### Dataset Summary CodeContests is a competitive programming dataset for machine-learning. This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode). It consists of programming problems, from a variety of sources: Site | URL | Source ----------- | --------------------------- | ------ Aizu | https://judge.u-aizu.ac.jp | [CodeNet](https://github.com/IBM/Project_CodeNet) AtCoder | https://atcoder.jp | [CodeNet](https://github.com/IBM/Project_CodeNet) CodeChef | https://www.codechef.com | [description2code](https://github.com/ethancaballero/description2code) Codeforces | https://codeforces.com | [description2code](https://github.com/ethancaballero/description2code) and Codeforces HackerEarth | https://www.hackerearth.com | [description2code](https://github.com/ethancaballero/description2code) Problems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages. ### Supported Tasks and Leaderboards - `translation` - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is "percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available [here](https://paperswithcode.com/sota/code-generation-on-codecontests). ### Languages English. ## Dataset Structure ### Data Instances A data point corresponds to a singular contest problem: ``` { 'name': '76_B. Mice', 'description': 'Modern researches has shown that a flock of hungry mice ' 'searching for a piece of...', 'public_tests': {'input': ['3 2 0 2\n0 1 3\n2 5\n'], 'output': ['1\n']}, 'private_tests': {'input': ['20 18 1 2\n' '-9999944 -9999861 -9999850 -9999763 -9999656 ' '-9999517 -9999375 -999927...', ..., '7 11 10 20\n' '6 18 32 63 66 68 87\n' '6 8 15 23 25 41 53 59 60 75 90\n'], 'output': ['2\n', ..., '1\n']}, 'generated_tests': {'input': ['7 11 10 5\n' '6 18 32 63 66 68 87\n' '6 8 15 23 25 41 53 59 60 75 90\n', ..., '7 11 10 4\n' '6 18 46 63 85 84 87\n' '6 8 15 18 25 41 53 59 60 75 90\n'], 'output': ['1\n', ..., '2\n']}, 'source': 2, 'difficulty': 8, 'solutions': {'language': [2, ..., 2], 'solution': ['#include <bits/stdc++.h>\n' 'using namespace std;\n' 'int n, m;\n' 'int data[2][100010], t[1...', ..., '#include <bits/stdc++.h>\n' 'using namespace std;\n' 'int n, m, pos[100100], food[100100...']}, 'incorrect_solutions': {'language': [2, ..., 2], 'solution': ['#include <bits/stdc++.h>\n' 'using namespace std;\n' 'vector<pair<int, int> > v[100010];...', ..., '#include <bits/stdc++.h>\n' 'using namespace std;\n' 'vector<pair<int, int> > v[100010];...']}, 'cf_contest_id': 76, 'cf_index': 'B', 'cf_points': 0.0, 'cf_rating': 2100, 'cf_tags': ['greedy', 'two pointers'], 'is_description_translated': False, 'untranslated_description': '', 'time_limit': {'seconds': 0, 'nanos': 500000000}, 'memory_limit_bytes': 256000000, 'input_file': '', 'output_file': '' } ``` ### Data Fields - `name`: The name of the contest. Note that names could agree between different sources. - `description`: A natural language description of a programming problem. - `public_tests`: Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired `input` and `output` that can be used to test potential solutions. They are therefore acceptable inputs to a model. - `private_tests`: Private tests are not visible before submitting a solution, so should not be made available as inputs to a model. - `generated_tests`: Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions. - `source`: The original source of the problem, with possible values including `UNKNOWN_SOURCE` (0),`CODECHEF` (1), `CODEFORCES` (2), `HACKEREARTH` (3), `CODEJAM` (4), `ATCODER` (5) and `AIZU` (6). - `difficulty`: A representation of the difficulty of the problem with possible values including `UNKNOWN_DIFFICULTY` (0), `EASY` (1), `MEDIUM` (2), `HARD` (3), `HARDER` (4), `HARDEST` (5), `EXTERNAL` (6), `A` (7), `B` (8), `C` (9), `D` (10), `E` (11), `F` (12), `G` (13), `H` (14), `I` (15), `J` (16), `K` (17), `L` (18), `M` (19), `N` (20), `O` (21), `P` (22), `Q` (23), `R` (24), `S` (25), `T` (26), `U` (27) and `V` (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, `cf_rating` is a more reliable measure of difficulty when available. - `solutions`: Correct solutions to the problem. Contrast with `incorrect_solutions` below. - `incorrect_solutions`: Incorrect solutions. - `cf_contest_id`: The Contest ID. Note that Contest ID is not monotonic with respect to time. - `cf_index`: Problem index, e.g. `"A"` or `"B"` or `"C"`. - `cf_points`: Points for the problem, e.g. `1000.0` - `cf_rating`: Problem rating (difficulty), e.g. `1100` - `cf_tags`: Problem tags, e.g. `['greedy', 'math']` - `is_description_translated`: Whether the problem was translated to English. - `untranslated_description`: The untranslated description is only available for translated problems. - `time_limit`: The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, `seconds` and `nanos`. This field is None if not defined. - `memory_limit_bytes`: The memory limit constraint to use when executing solutions. - `input_file`: Most problems use stdin for IO. Some problems expect specific files to be used instead. - `output_file`: Most problems use stdout for IO. Some problems expect specific files to be used instead. All tests are represented as a paired `input` and `output` that can be used to test potential solutions and all solutions comprise a `language`, with possible values including `UNKNOWN_LANGUAGE` (0), `PYTHON` (1) (solutions written in PYTHON2), `CPP` (2), `PYTHON3` (3) and `JAVA` (4), and a `solution` string written in that `language`. The fields preceded with `cf_` denote extra meta-data for Codeforces problems. ### Data Splits The data is split into training, validation and test set. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples. ## Dataset Creation ### Curation Rationale This dataset was created for fine-tuning AlphaCode models: > Models pre-trained on GitHub can generate good code and solve simple programming problems, but as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning the model on a dedicated competitive programming dataset is critical for performance. ### Source Data #### Initial Data Collection and Normalization The information on the data collection and normalization procedures can found in Section 3.2. and Appendinx B.2. of the paper. #### Who are the source language producers? The problems are scraped from the following platforms: [Aizu](https://judge.u-aizu.ac.jp), [AtCoder](https://atcoder.jp ), [CodeChef](https://www.codechef.com), [Codeforces](https://codeforces.com) and [HackerEarch](https://www.hackerearth.com). Additionally, some data from the existing public competitive programming dataset Description2Code ([Caballero et al., 2016](https://github.com/ethancaballero/description2code)) and CodeNet ([(Puri et al., 2021](https://arxiv.org/pdf/2105.12655.pdf)) is mixed into the training set. ### Annotations #### Annotation process The solutions are scapred alongside the problem descriptions. #### Who are the annotators? Same as the source data creators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals. ### Licensing Information This dataset is made available under the terms of the CC BY 4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)). Additional acknowledged contributions: * Codeforces materials are sourced from http://codeforces.com. * Description2Code materials are sourced from: [Description2Code Dataset](https://github.com/ethancaballero/description2code), licensed under the [MIT open source license](https://opensource.org/licenses/MIT), copyright not specified. * CodeNet materials are sourced from: [Project_CodeNet](https://github.com/IBM/Project_CodeNet), licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), copyright not specified. ### Citation Information ```bibtex @article{li2022competition, title={Competition-Level Code Generation with AlphaCode}, author={Li, Yujia and Choi, David and Chung, Junyoung and Kushman, Nate and Schrittwieser, Julian and Leblond, R{\'e}mi and Eccles, Tom and Keeling, James and Gimeno, Felix and Dal Lago, Agustin and Hubert, Thomas and Choy, Peter and de Masson d'Autume, Cyprien and Babuschkin, Igor and Chen, Xinyun and Huang, Po-Sen and Welbl, Johannes and Gowal, Sven and Cherepanov, Alexey and Molloy, James and Mankowitz, Daniel and Sutherland Robson, Esme and Kohli, Pushmeet and de Freitas, Nando and Kavukcuoglu, Koray and Vinyals, Oriol}, journal={arXiv preprint arXiv:2203.07814}, year={2022} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
deepmind/code_contests
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2203.07814", "arxiv:2105.12655", "region:us" ]
2022-07-19T15:02:55+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "codecontests", "pretty_name": "CodeContests"}
2023-06-11T11:22:30+00:00
[ "2203.07814", "2105.12655" ]
[ "en" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2203.07814 #arxiv-2105.12655 #region-us
Dataset Card for CodeContests ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: Competition-Level Code Generation with AlphaCode * Leaderboard: Code Generation on CodeContests * Point of Contact: David Choi ### Dataset Summary CodeContests is a competitive programming dataset for machine-learning. This dataset was used when training AlphaCode. It consists of programming problems, from a variety of sources: Site: Aizu, URL: URL, Source: CodeNet Site: AtCoder, URL: URL, Source: CodeNet Site: CodeChef, URL: URL, Source: description2code Site: Codeforces, URL: URL, Source: description2code and Codeforces Site: HackerEarth, URL: URL, Source: description2code Problems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages. ### Supported Tasks and Leaderboards * 'translation' - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is "percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available here. ### Languages English. Dataset Structure ----------------- ### Data Instances A data point corresponds to a singular contest problem: ### Data Fields * 'name': The name of the contest. Note that names could agree between different sources. * 'description': A natural language description of a programming problem. * 'public\_tests': Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired 'input' and 'output' that can be used to test potential solutions. They are therefore acceptable inputs to a model. * 'private\_tests': Private tests are not visible before submitting a solution, so should not be made available as inputs to a model. * 'generated\_tests': Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions. * 'source': The original source of the problem, with possible values including 'UNKNOWN\_SOURCE' (0),'CODECHEF' (1), 'CODEFORCES' (2), 'HACKEREARTH' (3), 'CODEJAM' (4), 'ATCODER' (5) and 'AIZU' (6). * 'difficulty': A representation of the difficulty of the problem with possible values including 'UNKNOWN\_DIFFICULTY' (0), 'EASY' (1), 'MEDIUM' (2), 'HARD' (3), 'HARDER' (4), 'HARDEST' (5), 'EXTERNAL' (6), 'A' (7), 'B' (8), 'C' (9), 'D' (10), 'E' (11), 'F' (12), 'G' (13), 'H' (14), 'I' (15), 'J' (16), 'K' (17), 'L' (18), 'M' (19), 'N' (20), 'O' (21), 'P' (22), 'Q' (23), 'R' (24), 'S' (25), 'T' (26), 'U' (27) and 'V' (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, 'cf\_rating' is a more reliable measure of difficulty when available. * 'solutions': Correct solutions to the problem. Contrast with 'incorrect\_solutions' below. * 'incorrect\_solutions': Incorrect solutions. * 'cf\_contest\_id': The Contest ID. Note that Contest ID is not monotonic with respect to time. * 'cf\_index': Problem index, e.g. '"A"' or '"B"' or '"C"'. * 'cf\_points': Points for the problem, e.g. '1000.0' * 'cf\_rating': Problem rating (difficulty), e.g. '1100' * 'cf\_tags': Problem tags, e.g. '['greedy', 'math']' * 'is\_description\_translated': Whether the problem was translated to English. * 'untranslated\_description': The untranslated description is only available for translated problems. * 'time\_limit': The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, 'seconds' and 'nanos'. This field is None if not defined. * 'memory\_limit\_bytes': The memory limit constraint to use when executing solutions. * 'input\_file': Most problems use stdin for IO. Some problems expect specific files to be used instead. * 'output\_file': Most problems use stdout for IO. Some problems expect specific files to be used instead. All tests are represented as a paired 'input' and 'output' that can be used to test potential solutions and all solutions comprise a 'language', with possible values including 'UNKNOWN\_LANGUAGE' (0), 'PYTHON' (1) (solutions written in PYTHON2), 'CPP' (2), 'PYTHON3' (3) and 'JAVA' (4), and a 'solution' string written in that 'language'. The fields preceded with 'cf\_' denote extra meta-data for Codeforces problems. ### Data Splits The data is split into training, validation and test set. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples. Dataset Creation ---------------- ### Curation Rationale This dataset was created for fine-tuning AlphaCode models: > > Models pre-trained on GitHub can generate good code and solve simple programming problems, but > as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning > the model on a dedicated competitive programming dataset is critical for performance. > > > ### Source Data #### Initial Data Collection and Normalization The information on the data collection and normalization procedures can found in Section 3.2. and Appendinx B.2. of the paper. #### Who are the source language producers? The problems are scraped from the following platforms: Aizu, AtCoder, CodeChef, Codeforces and HackerEarch. Additionally, some data from the existing public competitive programming dataset Description2Code (Caballero et al., 2016) and CodeNet ((Puri et al., 2021) is mixed into the training set. ### Annotations #### Annotation process The solutions are scapred alongside the problem descriptions. #### Who are the annotators? Same as the source data creators. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals. ### Licensing Information This dataset is made available under the terms of the CC BY 4.0 license (Creative Commons Attribution 4.0 International license). Additional acknowledged contributions: * Codeforces materials are sourced from URL. * Description2Code materials are sourced from: Description2Code Dataset, licensed under the MIT open source license, copyright not specified. * CodeNet materials are sourced from: Project\_CodeNet, licensed under Apache 2.0, copyright not specified. ### Contributions Thanks to @mariosasko for adding this dataset.
[ "### Dataset Summary\n\n\nCodeContests is a competitive programming dataset for machine-learning. This\ndataset was used when training AlphaCode.\n\n\nIt consists of programming problems, from a variety of sources:\n\n\nSite: Aizu, URL: URL, Source: CodeNet\nSite: AtCoder, URL: URL, Source: CodeNet\nSite: CodeChef, URL: URL, Source: description2code\nSite: Codeforces, URL: URL, Source: description2code and Codeforces\nSite: HackerEarth, URL: URL, Source: description2code\n\n\nProblems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages.", "### Supported Tasks and Leaderboards\n\n\n* 'translation' - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is \"percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem\", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available here.", "### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point corresponds to a singular contest problem:", "### Data Fields\n\n\n* 'name': The name of the contest. Note that names could agree between different sources.\n* 'description': A natural language description of a programming problem.\n* 'public\\_tests': Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired 'input' and 'output' that can be used to test potential solutions. They are therefore acceptable inputs to a model.\n* 'private\\_tests': Private tests are not visible before submitting a solution, so should not be made available as inputs to a model.\n* 'generated\\_tests': Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions.\n* 'source': The original source of the problem, with possible values including 'UNKNOWN\\_SOURCE' (0),'CODECHEF' (1), 'CODEFORCES' (2), 'HACKEREARTH' (3), 'CODEJAM' (4), 'ATCODER' (5) and 'AIZU' (6).\n* 'difficulty': A representation of the difficulty of the problem with possible values including 'UNKNOWN\\_DIFFICULTY' (0), 'EASY' (1), 'MEDIUM' (2), 'HARD' (3), 'HARDER' (4), 'HARDEST' (5), 'EXTERNAL' (6), 'A' (7), 'B' (8), 'C' (9), 'D' (10), 'E' (11), 'F' (12), 'G' (13), 'H' (14), 'I' (15), 'J' (16), 'K' (17), 'L' (18), 'M' (19), 'N' (20), 'O' (21), 'P' (22), 'Q' (23), 'R' (24), 'S' (25), 'T' (26), 'U' (27) and 'V' (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, 'cf\\_rating' is a more reliable measure of difficulty when available.\n* 'solutions': Correct solutions to the problem. Contrast with 'incorrect\\_solutions' below.\n* 'incorrect\\_solutions': Incorrect solutions.\n* 'cf\\_contest\\_id': The Contest ID. Note that Contest ID is not monotonic with respect to time.\n* 'cf\\_index': Problem index, e.g. '\"A\"' or '\"B\"' or '\"C\"'.\n* 'cf\\_points': Points for the problem, e.g. '1000.0'\n* 'cf\\_rating': Problem rating (difficulty), e.g. '1100'\n* 'cf\\_tags': Problem tags, e.g. '['greedy', 'math']'\n* 'is\\_description\\_translated': Whether the problem was translated to English.\n* 'untranslated\\_description': The untranslated description is only available for translated problems.\n* 'time\\_limit': The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, 'seconds' and 'nanos'. This field is None if not defined.\n* 'memory\\_limit\\_bytes': The memory limit constraint to use when executing solutions.\n* 'input\\_file': Most problems use stdin for IO. Some problems expect specific files to be used instead.\n* 'output\\_file': Most problems use stdout for IO. Some problems expect specific files to be used instead.\n\n\nAll tests are represented as a paired 'input' and 'output' that can be used to test potential solutions and all solutions comprise a 'language', with possible values including 'UNKNOWN\\_LANGUAGE' (0), 'PYTHON' (1) (solutions written in PYTHON2), 'CPP' (2), 'PYTHON3' (3) and 'JAVA' (4), and a 'solution' string written in that 'language'. The fields preceded with 'cf\\_' denote extra meta-data for Codeforces problems.", "### Data Splits\n\n\nThe data is split into training, validation and test set. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created for fine-tuning AlphaCode models:\n\n\n\n> \n> Models pre-trained on GitHub can generate good code and solve simple programming problems, but\n> as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning\n> the model on a dedicated competitive programming dataset is critical for performance.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe information on the data collection and normalization procedures can found in Section 3.2. and Appendinx B.2. of the paper.", "#### Who are the source language producers?\n\n\nThe problems are scraped from the following platforms: Aizu, AtCoder, CodeChef, Codeforces and HackerEarch. Additionally, some data from the existing public competitive programming dataset Description2Code (Caballero et al., 2016) and CodeNet ((Puri et al., 2021) is mixed into the training set.", "### Annotations", "#### Annotation process\n\n\nThe solutions are scapred alongside the problem descriptions.", "#### Who are the annotators?\n\n\nSame as the source data creators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nYujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals.", "### Licensing Information\n\n\nThis dataset is made available under the terms of the CC BY\n4.0 license (Creative Commons Attribution 4.0 International license).\n\n\nAdditional acknowledged contributions:\n\n\n* Codeforces materials are sourced from URL.\n* Description2Code materials are sourced from:\nDescription2Code Dataset,\nlicensed under the\nMIT open source license, copyright\nnot specified.\n* CodeNet materials are sourced from:\nProject\\_CodeNet, licensed under\nApache 2.0, copyright not\nspecified.", "### Contributions\n\n\nThanks to @mariosasko for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2203.07814 #arxiv-2105.12655 #region-us \n", "### Dataset Summary\n\n\nCodeContests is a competitive programming dataset for machine-learning. This\ndataset was used when training AlphaCode.\n\n\nIt consists of programming problems, from a variety of sources:\n\n\nSite: Aizu, URL: URL, Source: CodeNet\nSite: AtCoder, URL: URL, Source: CodeNet\nSite: CodeChef, URL: URL, Source: description2code\nSite: Codeforces, URL: URL, Source: description2code and Codeforces\nSite: HackerEarth, URL: URL, Source: description2code\n\n\nProblems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages.", "### Supported Tasks and Leaderboards\n\n\n* 'translation' - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is \"percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem\", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available here.", "### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point corresponds to a singular contest problem:", "### Data Fields\n\n\n* 'name': The name of the contest. Note that names could agree between different sources.\n* 'description': A natural language description of a programming problem.\n* 'public\\_tests': Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired 'input' and 'output' that can be used to test potential solutions. They are therefore acceptable inputs to a model.\n* 'private\\_tests': Private tests are not visible before submitting a solution, so should not be made available as inputs to a model.\n* 'generated\\_tests': Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions.\n* 'source': The original source of the problem, with possible values including 'UNKNOWN\\_SOURCE' (0),'CODECHEF' (1), 'CODEFORCES' (2), 'HACKEREARTH' (3), 'CODEJAM' (4), 'ATCODER' (5) and 'AIZU' (6).\n* 'difficulty': A representation of the difficulty of the problem with possible values including 'UNKNOWN\\_DIFFICULTY' (0), 'EASY' (1), 'MEDIUM' (2), 'HARD' (3), 'HARDER' (4), 'HARDEST' (5), 'EXTERNAL' (6), 'A' (7), 'B' (8), 'C' (9), 'D' (10), 'E' (11), 'F' (12), 'G' (13), 'H' (14), 'I' (15), 'J' (16), 'K' (17), 'L' (18), 'M' (19), 'N' (20), 'O' (21), 'P' (22), 'Q' (23), 'R' (24), 'S' (25), 'T' (26), 'U' (27) and 'V' (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, 'cf\\_rating' is a more reliable measure of difficulty when available.\n* 'solutions': Correct solutions to the problem. Contrast with 'incorrect\\_solutions' below.\n* 'incorrect\\_solutions': Incorrect solutions.\n* 'cf\\_contest\\_id': The Contest ID. Note that Contest ID is not monotonic with respect to time.\n* 'cf\\_index': Problem index, e.g. '\"A\"' or '\"B\"' or '\"C\"'.\n* 'cf\\_points': Points for the problem, e.g. '1000.0'\n* 'cf\\_rating': Problem rating (difficulty), e.g. '1100'\n* 'cf\\_tags': Problem tags, e.g. '['greedy', 'math']'\n* 'is\\_description\\_translated': Whether the problem was translated to English.\n* 'untranslated\\_description': The untranslated description is only available for translated problems.\n* 'time\\_limit': The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, 'seconds' and 'nanos'. This field is None if not defined.\n* 'memory\\_limit\\_bytes': The memory limit constraint to use when executing solutions.\n* 'input\\_file': Most problems use stdin for IO. Some problems expect specific files to be used instead.\n* 'output\\_file': Most problems use stdout for IO. Some problems expect specific files to be used instead.\n\n\nAll tests are represented as a paired 'input' and 'output' that can be used to test potential solutions and all solutions comprise a 'language', with possible values including 'UNKNOWN\\_LANGUAGE' (0), 'PYTHON' (1) (solutions written in PYTHON2), 'CPP' (2), 'PYTHON3' (3) and 'JAVA' (4), and a 'solution' string written in that 'language'. The fields preceded with 'cf\\_' denote extra meta-data for Codeforces problems.", "### Data Splits\n\n\nThe data is split into training, validation and test set. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created for fine-tuning AlphaCode models:\n\n\n\n> \n> Models pre-trained on GitHub can generate good code and solve simple programming problems, but\n> as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning\n> the model on a dedicated competitive programming dataset is critical for performance.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe information on the data collection and normalization procedures can found in Section 3.2. and Appendinx B.2. of the paper.", "#### Who are the source language producers?\n\n\nThe problems are scraped from the following platforms: Aizu, AtCoder, CodeChef, Codeforces and HackerEarch. Additionally, some data from the existing public competitive programming dataset Description2Code (Caballero et al., 2016) and CodeNet ((Puri et al., 2021) is mixed into the training set.", "### Annotations", "#### Annotation process\n\n\nThe solutions are scapred alongside the problem descriptions.", "#### Who are the annotators?\n\n\nSame as the source data creators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nYujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals.", "### Licensing Information\n\n\nThis dataset is made available under the terms of the CC BY\n4.0 license (Creative Commons Attribution 4.0 International license).\n\n\nAdditional acknowledged contributions:\n\n\n* Codeforces materials are sourced from URL.\n* Description2Code materials are sourced from:\nDescription2Code Dataset,\nlicensed under the\nMIT open source license, copyright\nnot specified.\n* CodeNet materials are sourced from:\nProject\\_CodeNet, licensed under\nApache 2.0, copyright not\nspecified.", "### Contributions\n\n\nThanks to @mariosasko for adding this dataset." ]
[ 92, 155, 131, 13, 17, 959, 49, 88, 4, 39, 88, 5, 18, 17, 18, 7, 8, 14, 143, 105, 17 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2203.07814 #arxiv-2105.12655 #region-us \n### Dataset Summary\n\n\nCodeContests is a competitive programming dataset for machine-learning. This\ndataset was used when training AlphaCode.\n\n\nIt consists of programming problems, from a variety of sources:\n\n\nSite: Aizu, URL: URL, Source: CodeNet\nSite: AtCoder, URL: URL, Source: CodeNet\nSite: CodeChef, URL: URL, Source: description2code\nSite: Codeforces, URL: URL, Source: description2code and Codeforces\nSite: HackerEarth, URL: URL, Source: description2code\n\n\nProblems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages.### Supported Tasks and Leaderboards\n\n\n* 'translation' - the competitive programming code generation problem can be viewed as a sequence-to-sequence translation task: given a problem description 𝑋 in natural language, produce a corresponding solution 𝑌 in a programming language. The metric used for evaluation is \"percentage of problems solved using 𝑛 submissions from 𝑘 samples per problem\", denoted as 𝑛@𝑘. More information on the evaluation of AlphaCode can be found in Section 2.2. and Appendix A.3. of the paper. The leaderboard for this task is available here.### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA data point corresponds to a singular contest problem:", "passage: ### Data Fields\n\n\n* 'name': The name of the contest. Note that names could agree between different sources.\n* 'description': A natural language description of a programming problem.\n* 'public\\_tests': Public tests are those that are available before submitting a solution, typically as part of the description itself. Represented as a paired 'input' and 'output' that can be used to test potential solutions. They are therefore acceptable inputs to a model.\n* 'private\\_tests': Private tests are not visible before submitting a solution, so should not be made available as inputs to a model.\n* 'generated\\_tests': Generated tests are automatically generated by modifying inputs from public and private tests and validating using known correct solutions.\n* 'source': The original source of the problem, with possible values including 'UNKNOWN\\_SOURCE' (0),'CODECHEF' (1), 'CODEFORCES' (2), 'HACKEREARTH' (3), 'CODEJAM' (4), 'ATCODER' (5) and 'AIZU' (6).\n* 'difficulty': A representation of the difficulty of the problem with possible values including 'UNKNOWN\\_DIFFICULTY' (0), 'EASY' (1), 'MEDIUM' (2), 'HARD' (3), 'HARDER' (4), 'HARDEST' (5), 'EXTERNAL' (6), 'A' (7), 'B' (8), 'C' (9), 'D' (10), 'E' (11), 'F' (12), 'G' (13), 'H' (14), 'I' (15), 'J' (16), 'K' (17), 'L' (18), 'M' (19), 'N' (20), 'O' (21), 'P' (22), 'Q' (23), 'R' (24), 'S' (25), 'T' (26), 'U' (27) and 'V' (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, 'cf\\_rating' is a more reliable measure of difficulty when available.\n* 'solutions': Correct solutions to the problem. Contrast with 'incorrect\\_solutions' below.\n* 'incorrect\\_solutions': Incorrect solutions.\n* 'cf\\_contest\\_id': The Contest ID. Note that Contest ID is not monotonic with respect to time.\n* 'cf\\_index': Problem index, e.g. '\"A\"' or '\"B\"' or '\"C\"'.\n* 'cf\\_points': Points for the problem, e.g. '1000.0'\n* 'cf\\_rating': Problem rating (difficulty), e.g. '1100'\n* 'cf\\_tags': Problem tags, e.g. '['greedy', 'math']'\n* 'is\\_description\\_translated': Whether the problem was translated to English.\n* 'untranslated\\_description': The untranslated description is only available for translated problems.\n* 'time\\_limit': The time limit constraint to use when executing solutions. Represented as a dictionary with two keys, 'seconds' and 'nanos'. This field is None if not defined.\n* 'memory\\_limit\\_bytes': The memory limit constraint to use when executing solutions.\n* 'input\\_file': Most problems use stdin for IO. Some problems expect specific files to be used instead.\n* 'output\\_file': Most problems use stdout for IO. Some problems expect specific files to be used instead.\n\n\nAll tests are represented as a paired 'input' and 'output' that can be used to test potential solutions and all solutions comprise a 'language', with possible values including 'UNKNOWN\\_LANGUAGE' (0), 'PYTHON' (1) (solutions written in PYTHON2), 'CPP' (2), 'PYTHON3' (3) and 'JAVA' (4), and a 'solution' string written in that 'language'. The fields preceded with 'cf\\_' denote extra meta-data for Codeforces problems.### Data Splits\n\n\nThe data is split into training, validation and test set. The training set contains 13328 samples, the validation set 117 samples and the test set 165 samples.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThis dataset was created for fine-tuning AlphaCode models:\n\n\n\n> \n> Models pre-trained on GitHub can generate good code and solve simple programming problems, but\n> as shown in Appendix B.3 they can solve very few competitive programming problems. Fine-tuning\n> the model on a dedicated competitive programming dataset is critical for performance.\n> \n> \n>### Source Data#### Initial Data Collection and Normalization\n\n\nThe information on the data collection and normalization procedures can found in Section 3.2. and Appendinx B.2. of the paper.#### Who are the source language producers?\n\n\nThe problems are scraped from the following platforms: Aizu, AtCoder, CodeChef, Codeforces and HackerEarch. Additionally, some data from the existing public competitive programming dataset Description2Code (Caballero et al., 2016) and CodeNet ((Puri et al., 2021) is mixed into the training set.### Annotations#### Annotation process\n\n\nThe solutions are scapred alongside the problem descriptions.#### Who are the annotators?\n\n\nSame as the source data creators.### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nYujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu and Oriol Vinyals." ]
7a73e5c5d9569f29a92fc65be56c3908ec280419
# Dataset Card for "relbert/conceptnet_high_confidence" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html) - **Dataset:** High Confidence Subset of ConceptNet ### Dataset Summary The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html), which compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { "relation_type": "AtLocation", "positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ], "negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ] } ``` ### Data Splits | name |train|validation| |---------|----:|---------:| |conceptnet_high_confidence| 25 | 24| ### Number of Positive/Negative Word-pairs in each Split | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) | |:-----------------|-------------------:|-------------------:|------------------------:|------------------------:| | AtLocation | 383 | 1768 | 97 | 578 | | CapableOf | 195 | 1790 | 73 | 600 | | Causes | 71 | 1797 | 26 | 595 | | CausesDesire | 9 | 1793 | 11 | 595 | | CreatedBy | 2 | 1796 | 0 | 0 | | DefinedAs | 0 | 0 | 2 | 595 | | Desires | 16 | 1794 | 12 | 595 | | HasA | 67 | 1814 | 17 | 595 | | HasFirstSubevent | 2 | 1796 | 0 | 0 | | HasLastSubevent | 2 | 1796 | 3 | 593 | | HasPrerequisite | 168 | 1803 | 57 | 592 | | HasProperty | 94 | 1801 | 39 | 605 | | HasSubevent | 125 | 1798 | 40 | 609 | | IsA | 310 | 1764 | 98 | 603 | | MadeOf | 17 | 1793 | 7 | 593 | | MotivatedByGoal | 14 | 1796 | 11 | 595 | | NotCapableOf | 15 | 1793 | 0 | 0 | | NotDesires | 4 | 1795 | 4 | 592 | | PartOf | 34 | 1801 | 7 | 593 | | ReceivesAction | 18 | 1793 | 8 | 593 | | SymbolOf | 0 | 0 | 2 | 596 | | UsedFor | 249 | 1815 | 81 | 588 | | SUM | 1795 | 35896 | 595 | 11305 | ### Citation Information ``` @InProceedings{P16-1137, author = "Li, Xiang and Taheri, Aynaz and Tu, Lifu and Gimpel, Kevin", title = "Commonsense Knowledge Base Completion", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ", year = "2016", publisher = "Association for Computational Linguistics", pages = "1445--1455", location = "Berlin, Germany", doi = "10.18653/v1/P16-1137", url = "http://aclweb.org/anthology/P16-1137" } ```
research-backup/conceptnet_high_confidence
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-19T18:26:12+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "ConceptNet with High Confidence"}
2022-09-20T00:13:24+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us
Dataset Card for "relbert/conceptnet\_high\_confidence" ======================================================= Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: High Confidence Subset of ConceptNet ### Dataset Summary The selected subset of ConceptNet used in this work, which compiled to fine-tune RelBERT model. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Splits ### Number of Positive/Negative Word-pairs in each Split
[ "### Dataset Summary\n\n\nThe selected subset of ConceptNet used in this work, which compiled\nto fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nThe selected subset of ConceptNet used in this work, which compiled\nto fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ 35, 39, 18, 5, 17 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n### Dataset Summary\n\n\nThe selected subset of ConceptNet used in this work, which compiled\nto fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Splits### Number of Positive/Negative Word-pairs in each Split" ]
41b8a9a3b3f7aab40340b983c8fd852240cf5fc5
# Dataset Card for "relbert/conceptnet" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://ojs.aaai.org/index.php/AAAI/article/view/11164](https://ojs.aaai.org/index.php/AAAI/article/view/11164) - **Dataset:** ConceptNet5 ### Dataset Summary ConceptNet5, which compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model. ## Dataset Structure ### Data Instances An example of `train` looks as follows. ``` { "relation_type": "AtLocation", "positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ], "negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ] } ``` ### Data Splits | name |train|validation| |---------|----:|---------:| |conceptnet| 33 | 25| ### Number of Positive/Negative Word-pairs in each Split | relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) | |:-----------------|-------------------:|-------------------:|------------------------:|------------------------:| | Antonym | 3175 | 206870 | 703 | 65330 | | AtLocation | 6974 | 203071 | 727 | 65306 | | CapableOf | 603 | 209442 | 0 | 0 | | Causes | 906 | 209139 | 83 | 65950 | | CausesDesire | 195 | 209850 | 30 | 66003 | | CreatedBy | 104 | 209941 | 4 | 66029 | | DefinedAs | 16 | 210029 | 2 | 66031 | | Desires | 374 | 209671 | 0 | 0 | | DistinctFrom | 1552 | 208493 | 426 | 65607 | | Entails | 277 | 209768 | 118 | 65915 | | HasA | 606 | 209439 | 10 | 66023 | | HasContext | 4664 | 205381 | 1936 | 64097 | | HasFirstSubevent | 66 | 209979 | 17 | 66016 | | HasLastSubevent | 82 | 209963 | 14 | 66019 | | HasPrerequisite | 586 | 209459 | 123 | 65910 | | HasProperty | 1397 | 208648 | 0 | 0 | | HasSubevent | 644 | 209401 | 64 | 65969 | | InstanceOf | 1 | 210044 | 0 | 0 | | IsA | 54028 | 156017 | 21122 | 44911 | | LocatedNear | 21 | 210024 | 3 | 66030 | | MadeOf | 221 | 209824 | 23 | 66010 | | MannerOf | 8762 | 201283 | 3747 | 62286 | | MotivatedByGoal | 282 | 209763 | 35 | 65998 | | NotCapableOf | 17 | 210028 | 0 | 0 | | NotDesires | 235 | 209810 | 0 | 0 | | NotHasProperty | 74 | 209971 | 19 | 66014 | | PartOf | 6880 | 203165 | 2629 | 63404 | | ReceivesAction | 290 | 209755 | 0 | 0 | | RelatedTo | 61672 | 148373 | 11356 | 54677 | | SimilarTo | 82 | 209963 | 36 | 65997 | | SymbolOf | 1 | 210044 | 0 | 0 | | Synonym | 52261 | 157784 | 22391 | 43642 | | UsedFor | 2997 | 207048 | 415 | 65618 | | SUM | 210045 | 6.72144e+06 | 66033 | 1.58479e+06 | ### Citation Information ``` @inproceedings{speer2017conceptnet, title={Conceptnet 5.5: An open multilingual graph of general knowledge}, author={Speer, Robyn and Chin, Joshua and Havasi, Catherine}, booktitle={Thirty-first AAAI conference on artificial intelligence}, year={2017} } ```
research-backup/conceptnet
[ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2022-07-19T18:27:44+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "pretty_name": "ConceptNet"}
2022-07-26T09:24:35+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us
Dataset Card for "relbert/conceptnet" ===================================== Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: ConceptNet5 ### Dataset Summary ConceptNet5, which compiled to fine-tune RelBERT model. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Splits ### Number of Positive/Negative Word-pairs in each Split
[ "### Dataset Summary\n\n\nConceptNet5, which compiled to fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nConceptNet5, which compiled to fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Splits", "### Number of Positive/Negative Word-pairs in each Split" ]
[ 35, 31, 18, 5, 17 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n### Dataset Summary\n\n\nConceptNet5, which compiled to fine-tune RelBERT model.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Splits### Number of Positive/Negative Word-pairs in each Split" ]
c5cd49c2881afa3525bbf9298f503934f3805f5c
# Dataset Card for lancaster_newsbooks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ota.bodleian.ox.ac.uk/repository/xmlui/handle/20.500.12024/2531 - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** Tony McEnery ### Dataset Summary This corpus consists of two collections of seventeenth-century English "newsbooks". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions were in both cases funded by the British Academy. The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. This is version 1.0 of the corpus, released April 2007; it supercedes earlier versions circulated informally. For more information about the corpus, see www.ling.lancs.ac.uk/newsbooks ### Supported Tasks and Leaderboards `text-classification`: This dataset can be used to augent existing datasets to find stylistic differences between texts from different time periods ### Languages The language in this dataset is English from 1654. The associated BCP-47 code is `en:GB` ## Dataset Structure ### Data Instances ``` { 'id': 'PerfAcc170', 'text': "Another late fight in Scotland, betwixt Col. Morgan and the Highlanders; with the number that were slain and taken Prisoners. The removing of Lieut. Col. John Lilburn from the Tower of London. The readiness of our Fleet for new action, though Peace be agreed on with Holland and Denmark. The taking of several more Prizes at sea. An Order of the Commissioners for the Trial and Approbation of public Preachers. Several proceedings of His Highness the Lord Protector and his Council, and another Ordinance touching the adjourning of the Term. Together with variety of choice Intelligence from several Foreign parts. From Wednesday APRIL 5 TO Wednesday April 12. 1654. Many Addresses were made to his Highness the Lord Protector, in the name of the City and County of York, and other places, wherein they acknowledge the great blessing of God to this Nation, that they have so great, so good and able a Protector. This day the Sessions began in the Old Bailey, and one of those that committed the late Robbery on Black-Heath, being called to his Trial, he refused to plead; but more hereafter. This evening about 9 of the Clock, the Dutch Ambassadors signed and sealed the Ratification of the Articles of Peace so long spoken of; so did likewise the Commissioners appointed to treat with them by his Highness the Lord Protector. Paris April 11, 1654. The Cardinal de Retz being removed from Vincennes by the Marshal de la Mesteray, is now safe arrived at Nantes, and put into the Castle. The Court Emissaries give out that he is not to be long there, but in a few days to be set at liberty, only that his Majesty desireth satisfaction upon some certain points, although the main drift is to make him surrender his place of Archbishop of this City. The Commissioners of Languedoc cannot yet prevail in anything upon their Complaints, but are like the Commissioners of Catalonia, who hitherto have prevailed no further than to receive many fair words, but nothing effectual, the main work now in hand, is to find monies speedily for the setting forth of the Army, that they may be in the field as soon as may be, and to that end the Partisans are not wanting to find out new ways for exacting of monies, preferring large sums to be put into the King's Coffers, the difficulty lieth only in the effecting of it, by reason that the Country is in most places so exhausted of monies, that they are scarce able to live: The design for the King's Coronation is now on foot again, and if I am rightly informed, it will be done about the middle of May next, which being done, his Majesty shall go upon the borders and down to Picardy to forward his Army in their Action, so much the rather, by reason that the Prince of Conde, whom we hear was last week at Valenciennes, and then taking a view of his Army, is returned to Bruxels, there to confer with the Archduke Leopoldus for to obtain money and other necessaries for the march of his Army, that so they may fall to action as soon as the weather and season will give them leave, his Lady and son are still at Rocroy, where they are expecting some alteration to their present condition. The Earl of Harcourt hath not yet received any answer from the Court upon those proposals which he lately sent to the Court. We have news, that the Duke Francis hath at last accepted the command of his Brother the Duke of Lorrain's Army, and is expected there in a few days, which our Cardinal doth very well relish. The forces that were in the Country of Liege are now marching homewards, and are to be quartered in Lorrain. The great preparation for an Armado to go from Marseilles and Touloon, is much at a stand, only there are lately 5 men of War gone to Sea, and 3 more are to follow, but upon no design than to rob and plunder upon the sea, sparing scarce any they encounter, whether they be friends or foes. This day his Highness the Lord Protector and his Council, passed an Ordinance for adjourning of Easter Term, from and after the first Return thereof, called Quindena Pasch, until the first Return of Trinity Term, called Crastino Trinatatis. Dalkieth, April 3. Cap. Sherwin Commander of the Primrose, and Cap. Smith Commander of the Duchess, in their return from Orkney, took a Dutch vessel laden with French and Spanish Wines, linen Cloth, and other good commodities, bound for the West Indies; they sent her into Aberdeen. Some young Lairds and others purposing to glean a party of horse in Lothian, and repair to the enemy, are taken, and brought hither prisoners. Aberdeen, April 1. The Earl of Athol is come to Glencarn with about 700 horse and foot, Seaford and some new raised forces are daily expected to join with them. Glencarn with his whole force, consisting of 2000 horse and foot, is at Dingwel, two miles from Brahan, not undeserving the name of an Island, so that we hope to engage them there. In order whereunto Lieut. Col. Mitchell is marched towards Inverness with 9 companies of Foot, and Col. Morgan hath followed him with 5 troops of Col Rich his Regiment, and 4 troops of Dragoons; he intends to take Col. Tomlinson's Regiment, which is in his way, and to draw 5 companies of Foot out of Inverness. From Cows in the Isle of Wight, April 6. A private man of War hath, about two days since, taken and brought in hither two French vessels, one of which is laden with Salt, the other hath but little except ballast; Our Fleet is for the most part near St. Helens point and the rest as the Spits head, being in all near 100 sail, gallant ships, and bravely accommodated. One of our Frigates hath taken a Holland ship, and carried her to Portsmouth; she hath in her 8 Bales of Paper, and some small quantity of Indico. Many ships that were here, went away yesterday morning towards the Downs; and several Merchants' ships are at present here in this road, being detained by contrary winds; they expect some favourable Easterly gales, that so they may proceed on their intended voyages. Deal, April 7. A man of War of ours is this morning gone for Holland, to get the Ratification of the Peace made with them, and an Express from the Dutch Ambassador, touching the Agreement. Most part of the ships which remained in this Road, are gone up into the River of Thames; here is only some few left that are bound to the Southward. A Fleet consisting of about 40 or 50 sail of ships, great and small, passed by this place, which we suppose to be the Dunkirk fleet bound for London. Because many will not give credit to the Agreement of Peace between the Commonwealths of England and Holland, (though their Unbelief proceeds from several causes, some prejudicately fearing the worst, and others wishing and desiring rather than the Fountain of Blood may still be open) We can, and do assure you, That the Articles (as we said before) were signed and sealed by the Commissioners on both sides, on Wednesday night last, and within 14 days are to be signed and sealed by the Lord Protector, and the States of Holland, and then to publicly proclaimed and published, both in England and Holland in one day. The Agreement with Denmark is also taken in upon the Articles: And for satisfaction of the loss which our English Merchants sustained by that King's command, whose demands amount to about 150000l. it is referred to four Merchants, two whereof to be English, and the other two Dutch; which four Merchants shall have absolute power to determine those demands within the space of twenty days; the place where they are to sit, is Guildhall. As touching the business of Amboyna, it is referred to eight Commissioners, who have six months time to agree thereon, and in case they agree not, then Umpires are nominated to determine that business. Let those that delight themselves in blood, have blood to drink, for they are worthy. From Legorn, March 23. thus. This week in the sight of this City was a sore fight between two ships at Sea, the one Dutchman of War of 32 guns, and the other an English ship called the Expedition, who came from Zant with Currans; the fight lasted 6 hours, but night having parted them, both ships sunk; most of the men were saved, but nothing else, though the fight was near the shore. It is advertised from Cullen, That the Treaty between that Elector and the Spanish Commissioners, is brought to perfection, and signed, which is, That both French and Spanish shall have free passage through the Country of Liege, not committing any acts of hostility upon each other; and the Spaniards in point of satisfaction for the losses received from them and the Lorrainers, shall pay to the said Elector 200000 Rixdollars out of the Duke of Lorrain's estate, and for security of performance, the Lordship of Kerpen, and another in Gulick shall be put into his hands until full payment. From Poland thus. The General of the Cossacks hath delivered up three very considerable places to the Muscovite, and caused himself to be re baptized after the Muscovia manner, which is so ill resented by all sorts of people in that Country, that the Commanders sent to the King of Poland, That if he pleased to send them a general pardon for what they had done, and the rest of the Army, they will return with the major part of the Army into his Majesty's service; which hath so incensed the General, that having caused them to be apprehended he hath made each of them shorter by the head, which hath caused much heart burning among the people. Whereas many abuses and corruptions are crept into the ordinary course and administration of Justice, both in Law and Equity, the reformation whereof hath not yet been attained; Out of a tender care and desire that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient that so necessary and good a work may at length be brought to effect, it is held convenient and necessary to adjourn part of the next Term of Easter; be if therefore Ordained by his Highness the Lord Protector, by and with the consent of his Council, That part of the said Term of Easter now next coming be adjourned, that is to say, from and after the first Return, called Quindena Pasch, unto the last Return of the said Easter Term, called Crastino Ascensionis; And all and every person or persons, which have cause, or commandment to appear in any of the Courts at Westminster, in or at any day or time, from and after the said Return, called Quindena Pasch, may tarry at their dwellings, or where their business shall lie, without resorting to any of the said Courts for that Cause, until the said last Return, called Crastino Ascensionis, without danger or forfeiture, penalty or contempt to be in that behalf. And be it also ordained by the Authority aforesaid, That Writs of Adjournment shall be directed to the Justices of the said Courts, and Barons of the Exchequer, giving them authority to adjourn the said part of the said Term of Easter, as aforesaid, that is to say, from and after the said first Return called Quindena Pasch, until the said last Return of the said Term, called Crastino Ascensionis, as before is said, and the said adjournment shall be made, as aforesaid. And be it further Ordained, That all Matters, Causes and Suits, depending in any of the said Courts, shall have continuance, and the parties shall have day, from the day of the said Adjournment, until the said Return of Crastino Ascensionis, as is aforesaid; and the Lord's Commissioners of the Great Seal are required to issue forth Writs accordingly. And be it further Ordained, That a former Ordinance of the sixth day of this instant April, for the Adjourning of part of the said Term, until the first Return of Trinity Term next, called Crastino Trinitatis, be from henceforth Repealed and void. And it is lastly Ordained by the Authority aforesaid, That the Sheriffs of London and Middlesex, and all other Sheriffs both in England and Wales, do forthwith proclaim and publish this Ordinance in the chief Market Towns and usual places within their several and respective Counties. Lieutenant Colonel John Lilburn being said to have again attempted something against the State, is removed from the Tower to be prisoner in some more remote place. The titular King of Scots is still at Paris, and of late something more merry than ordinary. The Deputies for Languedoc telling him, that if there were a Peace concluded with England, it would be well for all the Protestants in France; He made answer that he was glad of it, for it would then be the better for himself. This day was the Gaol delivery; three were hanged, one whereof died most desperately, and going up the Cart, drank a health to the Devil's Majesty: One was pressed last Saturday, and being afterwards heard to groan, was carried down to the Press-yard again to have the execution dispatched. The Commissioners for Approbation of public Ministers, sate at Whitehall, and divers Certificates were presented unto them in behalf of several particular persons, for approbation; and in regard that none hereafter should out of carelessness of partiality set their hands to a Certificate for any person that hereafter should out of carelessness or partiality let their hands to a Certificate for any person that hereafter may be found unworthy to be admitted, and so become prejudicial to the Church of Christ, and frustrate the intentions of our Governors which made this Ordinance; the said Commissioners do earnestly beseech all whom it may concern (in the bowels of Christ) as they tender the honour of the great God himself, whose servants we all are, the prejudice of the souls of his people purchased by the blood of his Son, the advancement and propagation of his Gospel, through all the parts of this Land and Nation, whereunto we belong, so to lend assistance both of their fervent prayers, and due informations, that thereby the work may be carried on more prosperously, and the Commissioners more encouraged to attend it. Signed in the name, and at the request of the Commissioners for Approbation of public Preachers. By Francis Rouse, Io. Arrowsmith. William Goss. Stephen Marshall. The last Letters from Edinburgh speak of another Engagement betwixt Col. Morgan, and the Enemy; but they tell us not the particulars, only they say, that the Enemy is once more dispersed, and driven further up into the mountains, with the loss of about 200 men. The peace with Holland being concluded (as you heard before) our Merchants are lading of goods on shipboard, as fast as Lighters can be gotten to carry them where the ships ride at anchor. We likewise hear of the like preparations in Holland for transporting of goods of several sorts hither. And now all the rest of Europe are at a stand, or at leastwise stand gazing upon us, and begin to cast about with themselves, what action may be great and considerable enough for to be undertaken next by those great Fleets, which are as ready for action as any opportunity can be to offer itself. How they will be disposed of Time will discover. London, Printed by E. Alsop 1654.", 'title': 'A Perfect Account, Issue 170'} ``` ### Data Fields ``` { "id": Unique identifier for that data point("string"), "text": Text in that datapoint("string"), "title": The title of the news article("string") } ``` ### Data Splits Train: 303 ## Dataset Creation ### Curation Rationale The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. ### Source Data #### Initial Data Collection and Normalization This corpus was created by the Department of Linguistics and English Language, Lancaster University. #### Who are the source language producers? The original data was humna-generated from existing newsbooks ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information None, since this dataset is from 1654 ## Considerations for Using the Data ### Social Impact of Dataset This dataset provides an insight into the news and social systems from 17th century England ### Discussion of Biases The dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators This corpus was created by the Department of Linguistics and English Language, Lancaster University. Project leader: Tony McEnery Corpus editor: Andrew Hardie ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License ### Citation Information @misc{20.500.12024/2531, title = {The Lancaster Newsbooks Corpus}, author = {Thomason, George, d. 1666}, url = {http://hdl.handle.net/20.500.12024/2531}, note = {Oxford Text Archive}, copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.}, year = {2005} }
biglam/lancaster_newsbooks
[ "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "newsbooks", "1654", "lancaster", "oxford text", "region:us" ]
2022-07-19T18:48:58+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "Lancaster Newsbooks", "tags": ["newsbooks", "1654", "lancaster", "oxford text"]}
2022-08-18T15:03:54+00:00
[]
[ "en" ]
TAGS #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #newsbooks #1654 #lancaster #oxford text #region-us
# Dataset Card for lancaster_newsbooks ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: Tony McEnery ### Dataset Summary This corpus consists of two collections of seventeenth-century English "newsbooks". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions were in both cases funded by the British Academy. The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. This is version 1.0 of the corpus, released April 2007; it supercedes earlier versions circulated informally. For more information about the corpus, see URL ### Supported Tasks and Leaderboards 'text-classification': This dataset can be used to augent existing datasets to find stylistic differences between texts from different time periods ### Languages The language in this dataset is English from 1654. The associated BCP-47 code is 'en:GB' ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits Train: 303 ## Dataset Creation ### Curation Rationale The FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project "Looking at text re-use in a corpus of seventeenth-century news reportage", funded by the British Academy, grant reference SG-33825. The SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook "Mercurius Fumigosus", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project "Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655", funded by the British Academy, grant reference LRG-35423. ### Source Data #### Initial Data Collection and Normalization This corpus was created by the Department of Linguistics and English Language, Lancaster University. #### Who are the source language producers? The original data was humna-generated from existing newsbooks ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information None, since this dataset is from 1654 ## Considerations for Using the Data ### Social Impact of Dataset This dataset provides an insight into the news and social systems from 17th century England ### Discussion of Biases The dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc. ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators This corpus was created by the Department of Linguistics and English Language, Lancaster University. Project leader: Tony McEnery Corpus editor: Andrew Hardie ### Licensing Information Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License @misc{20.500.12024/2531, title = {The Lancaster Newsbooks Corpus}, author = {Thomason, George, d. 1666}, url = {URL note = {Oxford Text Archive}, copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.}, year = {2005} }
[ "# Dataset Card for lancaster_newsbooks", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Tony McEnery", "### Dataset Summary\n\nThis corpus consists of two collections of seventeenth-century English \"newsbooks\". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions were in both cases funded by the British Academy.\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \n\nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423. \n\nThis is version 1.0 of the corpus, released April 2007; it supercedes earlier versions circulated informally.\n\nFor more information about the corpus, see URL", "### Supported Tasks and Leaderboards\n\n'text-classification': This dataset can be used to augent existing datasets to find stylistic differences between texts from different time periods", "### Languages\n\nThe language in this dataset is English from 1654. The associated BCP-47 code is 'en:GB'", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\nTrain: 303", "## Dataset Creation", "### Curation Rationale\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis corpus was created by the Department of Linguistics and English Language, Lancaster University.", "#### Who are the source language producers?\n\nThe original data was humna-generated from existing newsbooks", "### Annotations", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information\n\nNone, since this dataset is from 1654", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset provides an insight into the news and social systems from 17th century England", "### Discussion of Biases\n\nThe dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc.", "### Other Known Limitations\n\n[N/A]", "## Additional Information", "### Dataset Curators\n\nThis corpus was created by the Department of Linguistics and English Language, Lancaster University.\n\nProject leader: Tony McEnery\nCorpus editor: Andrew Hardie", "### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License\n\n\n\n @misc{20.500.12024/2531,\n title = {The Lancaster Newsbooks Corpus},\n author = {Thomason, George, d. 1666},\n url = {URL\n note = {Oxford Text Archive},\n copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.},\n year = {2005} }" ]
[ "TAGS\n#annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #newsbooks #1654 #lancaster #oxford text #region-us \n", "# Dataset Card for lancaster_newsbooks", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Tony McEnery", "### Dataset Summary\n\nThis corpus consists of two collections of seventeenth-century English \"newsbooks\". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions were in both cases funded by the British Academy.\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \n\nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423. \n\nThis is version 1.0 of the corpus, released April 2007; it supercedes earlier versions circulated informally.\n\nFor more information about the corpus, see URL", "### Supported Tasks and Leaderboards\n\n'text-classification': This dataset can be used to augent existing datasets to find stylistic differences between texts from different time periods", "### Languages\n\nThe language in this dataset is English from 1654. The associated BCP-47 code is 'en:GB'", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\nTrain: 303", "## Dataset Creation", "### Curation Rationale\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis corpus was created by the Department of Linguistics and English Language, Lancaster University.", "#### Who are the source language producers?\n\nThe original data was humna-generated from existing newsbooks", "### Annotations", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information\n\nNone, since this dataset is from 1654", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset provides an insight into the news and social systems from 17th century England", "### Discussion of Biases\n\nThe dataset is from the 17th century and some articles might reflect social biases of the time in terms of sexuality, gender, race, etc.", "### Other Known Limitations\n\n[N/A]", "## Additional Information", "### Dataset Curators\n\nThis corpus was created by the Department of Linguistics and English Language, Lancaster University.\n\nProject leader: Tony McEnery\nCorpus editor: Andrew Hardie", "### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License\n\n\n\n @misc{20.500.12024/2531,\n title = {The Lancaster Newsbooks Corpus},\n author = {Thomason, George, d. 1666},\n url = {URL\n note = {Oxford Text Archive},\n copyright = {Distributed by the University of Oxford under a Creative Commons Attribution-{NonCommercial}-{ShareAlike} 3.0 Unported License.},\n year = {2005} }" ]
[ 85, 11, 112, 29, 341, 44, 28, 6, 6, 5, 9, 5, 232, 4, 31, 23, 5, 10, 14, 19, 8, 24, 41, 12, 5, 40, 110 ]
[ "passage: TAGS\n#annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-sa-3.0 #newsbooks #1654 #lancaster #oxford text #region-us \n# Dataset Card for lancaster_newsbooks## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Tony McEnery", "passage: ### Dataset Summary\n\nThis corpus consists of two collections of seventeenth-century English \"newsbooks\". Both were drawn from the Thomason Tracts collection, which is held at the British Library and available in graphical form via Early English Books Online (EEBO). The construction of these keyboarded versions were in both cases funded by the British Academy.\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654). This was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \n\nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423. \n\nThis is version 1.0 of the corpus, released April 2007; it supercedes earlier versions circulated informally.\n\nFor more information about the corpus, see URL### Supported Tasks and Leaderboards\n\n'text-classification': This dataset can be used to augent existing datasets to find stylistic differences between texts from different time periods### Languages\n\nThe language in this dataset is English from 1654. The associated BCP-47 code is 'en:GB'## Dataset Structure### Data Instances### Data Fields### Data Splits\n\nTrain: 303## Dataset Creation### Curation Rationale\n\nThe FIRST collection (1654_newsbooks) consists of every newsbook published in London and still surviving in the Thomason Tracts from the first half of 1654 (to be precise, for the second half of December 1653 to the end of May 1654, with one or two additions from the first week in June, 1654) and was constructed for the project \"Looking at text re-use in a corpus of seventeenth-century news reportage\", funded by the British Academy, grant reference SG-33825. \nThe SECOND collection (mercurius_fumigosus) consists of every surviving issue published of the highly idiosyncratic newsbook \"Mercurius Fumigosus\", written by John Crouch between summer 1654 and early autumn 1655. This was constructed for the project \"Decoding the news - Mercurius Fumigosus as a source of news in the interregnum, 1654-1655\", funded by the British Academy, grant reference LRG-35423.### Source Data#### Initial Data Collection and Normalization\n\nThis corpus was created by the Department of Linguistics and English Language, Lancaster University.#### Who are the source language producers?\n\nThe original data was humna-generated from existing newsbooks### Annotations#### Annotation process\n\n[N/A]#### Who are the annotators?\n\n[N/A]### Personal and Sensitive Information\n\nNone, since this dataset is from 1654## Considerations for Using the Data### Social Impact of Dataset\n\nThis dataset provides an insight into the news and social systems from 17th century England" ]
6c18754cc3af5656edef386b34f37ef496788a33
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-7328461a-11225503
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T20:48:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": ["perplexity"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-07-19T21:01:15+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 96, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
97d2dd14602e380348a4f29f4441e70a01858e1f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-73f27c66-11235504
[ "autotrain", "evaluation", "region:us" ]
2022-07-19T20:54:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": ["perplexity"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T04:32:04+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 99, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
0a6c80f0a7934f718fcc0d7b2f22fdf9440b231f
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-zh
[ "region:us" ]
2022-07-19T23:42:55+00:00
{}
2024-01-31T23:56:24+00:00
[]
[]
TAGS #region-us
## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit URL !image The links to the SynthDoG-generated datasets are here: - 'synthdog-en': English, 0.5M. - 'synthdog-zh': Chinese, 0.5M. - 'synthdog-ja': Japanese, 0.5M. - 'synthdog-ko': Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details. ## How to Cite If you find this work useful to you, please cite:
[ "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ "TAGS\n#region-us \n", "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ 6, 140, 17 ]
[ "passage: TAGS\n#region-us \n## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.## How to Cite\n\nIf you find this work useful to you, please cite:" ]
5c895d0deb129102f9c2fe279eb456548e261c8a
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-ja
[ "region:us" ]
2022-07-19T23:45:12+00:00
{}
2024-01-31T23:56:09+00:00
[]
[]
TAGS #region-us
## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit URL !image The links to the SynthDoG-generated datasets are here: - 'synthdog-en': English, 0.5M. - 'synthdog-zh': Chinese, 0.5M. - 'synthdog-ja': Japanese, 0.5M. - 'synthdog-ko': Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details. ## How to Cite If you find this work useful to you, please cite:
[ "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ "TAGS\n#region-us \n", "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ 6, 140, 17 ]
[ "passage: TAGS\n#region-us \n## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.## How to Cite\n\nIf you find this work useful to you, please cite:" ]
1e6c76a1a5f10aa967a60a880f7dbc06ac29a8d6
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-ko
[ "region:us" ]
2022-07-19T23:45:45+00:00
{}
2024-01-31T23:55:41+00:00
[]
[]
TAGS #region-us
## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit URL !image The links to the SynthDoG-generated datasets are here: - 'synthdog-en': English, 0.5M. - 'synthdog-zh': Chinese, 0.5M. - 'synthdog-ja': Japanese, 0.5M. - 'synthdog-ko': Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details. ## How to Cite If you find this work useful to you, please cite:
[ "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ "TAGS\n#region-us \n", "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ 6, 140, 17 ]
[ "passage: TAGS\n#region-us \n## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.## How to Cite\n\nIf you find this work useful to you, please cite:" ]
961402a28a0c436af83eab460132148053441208
from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom") model = AutoModel.from_pretrained("bigscience/bloom")
Willaim/H
[ "region:us" ]
2022-07-20T01:48:57+00:00
{}
2022-07-20T01:50:07+00:00
[]
[]
TAGS #region-us
from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom") model = AutoModel.from_pretrained("bigscience/bloom")
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
8acfecc725b172d1283aa50f67521ddc08b3c682
# ShahNegar (A Plotted version of The Shahnameh) This dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka [craiyon](https://www.craiyon.com/)). You can use this dataset using the code below: ```python from datasets import load_dataset dataset = load_dataset("sadrasabouri/ShahNegar") ``` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** - **Point of Contact:** [Sadra Sabouri](mailto:[email protected]) ### Dataset Summary This dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same `id` field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. After all we reached to more than 30K, 256 * 256 images. ### Supported Tasks and Leaderboards The main purpose of making this dataset open source is because of its artistic value, but it can also be used for the below tasks: + text-to-image + image-to-text (image captioning) ### Languages The Shahnameh was generally written in Farsi (Persian) but the translated version we used for this dataset - [satoor](https://www.sattor.com/english/Shahnameh.pdf) - was completely in English with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries which is the corresponding Farsi poem as soon as possible. ## Dataset Structure ### Data Fields Here is an instance of our dataset: ```json { "image": <PIL Image Bytes>, "id": 0, "text": "He took up his abode in the mountains, and clad himself and his people in tiger-skins, and from him sprang all kindly nurture and the arts of clothing, till then unknown." } ``` + `image`: the image for given text. + `id`: the id for the text (**Not for the image**). + `text`: the English text for the image. ### Data Splits This dataset has only a split (`train` split). ## Dataset Creation The translated version of the Shahnameh was generally derived from the [satoor](https://www.sattor.com/english/Shahnameh.pdf) website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and give each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After a few annotations, we came up with more than 30000 images. ### Annotations #### Annotation process Through the process of image generation, we noticed a bias in the DALL-E models towards the word `iran`. It was biased so that each sentence with this given word would have pictures from Iran's political figures which were usually totally irrelevant. The annotation process mainly focused to deal with these pictures. We removed those images which seems to be harmful to those figures and/or were irrelevant to the context. #### Who are the annotators? Mahsa Namdar and Sadra Sabouri were the annotators of this dataset. ### Personal and Sensitive Information Since the textual data is easily downloadable and the images were generated through an image generation model there shouldn't be any personal information in this dataset. Just in case you find something harmful or violating of one's personal information please let us know. We will take proper action as soon as possible. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is mainly aimed to release for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our precious contribution. This dataset is not only used for this purpose but also can as a dataset in image-to-text and text-to-image tasks. ### Discussion of Biases The dataset's possible biases would come from the DALL-E mini biases. It's actually a good practice to check the dataset entries in order to find biases in that model. One it's worth mentioning in this work is the DALL-E mini model's bias for the word `iran` which nearly always comes up with images from political figures of this country. ### Other Known Limitations There are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since nowadays models are not perfect - and so do their output, it wouldn't be a good idea to use these artificially generated datasets as input to the new model. They suggest that by doing so we are actually limiting our accuracy by the model's accuracy which provided the primary dataset. ## Additional Information ### Dataset Curators + Emad Fatemizadeh: The general idea for generating a graphical version of Farsi poems was firstly introduced by him. + Sadra Sabouri: He looked up a translated version of the Shahnameh, extract and tokenized poems from it, and used the online DALL-E mini API to generate images from poems. + Mahsa Namdar: The process of annotation as a post-process on data has been held by her. ### Licensing Information MIT ### Citation Information [More Information Needed] ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.
sadrasabouri/ShahNegar
[ "task_categories:image-to-text", "task_categories:text-to-image", "task_ids:image-captioning", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-07-20T04:13:00+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-to-text", "text-to-image"], "task_ids": ["image-captioning"], "pretty_name": "ShahNegar"}
2022-10-21T10:54:05+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #task_categories-text-to-image #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us
# ShahNegar (A Plotted version of The Shahnameh) This dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka craiyon). You can use this dataset using the code below: ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Paper: - Point of Contact: Sadra Sabouri ### Dataset Summary This dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same 'id' field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. After all we reached to more than 30K, 256 * 256 images. ### Supported Tasks and Leaderboards The main purpose of making this dataset open source is because of its artistic value, but it can also be used for the below tasks: + text-to-image + image-to-text (image captioning) ### Languages The Shahnameh was generally written in Farsi (Persian) but the translated version we used for this dataset - satoor - was completely in English with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries which is the corresponding Farsi poem as soon as possible. ## Dataset Structure ### Data Fields Here is an instance of our dataset: + 'image': the image for given text. + 'id': the id for the text (Not for the image). + 'text': the English text for the image. ### Data Splits This dataset has only a split ('train' split). ## Dataset Creation The translated version of the Shahnameh was generally derived from the satoor website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and give each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After a few annotations, we came up with more than 30000 images. ### Annotations #### Annotation process Through the process of image generation, we noticed a bias in the DALL-E models towards the word 'iran'. It was biased so that each sentence with this given word would have pictures from Iran's political figures which were usually totally irrelevant. The annotation process mainly focused to deal with these pictures. We removed those images which seems to be harmful to those figures and/or were irrelevant to the context. #### Who are the annotators? Mahsa Namdar and Sadra Sabouri were the annotators of this dataset. ### Personal and Sensitive Information Since the textual data is easily downloadable and the images were generated through an image generation model there shouldn't be any personal information in this dataset. Just in case you find something harmful or violating of one's personal information please let us know. We will take proper action as soon as possible. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is mainly aimed to release for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our precious contribution. This dataset is not only used for this purpose but also can as a dataset in image-to-text and text-to-image tasks. ### Discussion of Biases The dataset's possible biases would come from the DALL-E mini biases. It's actually a good practice to check the dataset entries in order to find biases in that model. One it's worth mentioning in this work is the DALL-E mini model's bias for the word 'iran' which nearly always comes up with images from political figures of this country. ### Other Known Limitations There are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since nowadays models are not perfect - and so do their output, it wouldn't be a good idea to use these artificially generated datasets as input to the new model. They suggest that by doing so we are actually limiting our accuracy by the model's accuracy which provided the primary dataset. ## Additional Information ### Dataset Curators + Emad Fatemizadeh: The general idea for generating a graphical version of Farsi poems was firstly introduced by him. + Sadra Sabouri: He looked up a translated version of the Shahnameh, extract and tokenized poems from it, and used the online DALL-E mini API to generate images from poems. + Mahsa Namdar: The process of annotation as a post-process on data has been held by her. ### Licensing Information MIT ### Contributions Thanks to @sadrasabouri for adding this dataset.
[ "# ShahNegar (A Plotted version of The Shahnameh)\n\nThis dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka craiyon). You can use this dataset using the code below:", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Paper:\n- Point of Contact: Sadra Sabouri", "### Dataset Summary\n\nThis dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same 'id' field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. After all we reached to more than 30K, 256 * 256 images.", "### Supported Tasks and Leaderboards\n\nThe main purpose of making this dataset open source is because of its artistic value, but it can also be used for the below tasks:\n+ text-to-image\n+ image-to-text (image captioning)", "### Languages\n\nThe Shahnameh was generally written in Farsi (Persian) but the translated version we used for this dataset - satoor - was completely in English with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries which is the corresponding Farsi poem as soon as possible.", "## Dataset Structure", "### Data Fields\n\nHere is an instance of our dataset:\n\n\n+ 'image': the image for given text.\n+ 'id': the id for the text (Not for the image).\n+ 'text': the English text for the image.", "### Data Splits\n\nThis dataset has only a split ('train' split).", "## Dataset Creation\n\nThe translated version of the Shahnameh was generally derived from the satoor website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and give each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After a few annotations, we came up with more than 30000 images.", "### Annotations", "#### Annotation process\n\nThrough the process of image generation, we noticed a bias in the DALL-E models towards the word 'iran'. It was biased so that each sentence with this given word would have pictures from Iran's political figures which were usually totally irrelevant. The annotation process mainly focused to deal with these pictures. We removed those images which seems to be harmful to those figures and/or were irrelevant to the context.", "#### Who are the annotators?\n\nMahsa Namdar and Sadra Sabouri were the annotators of this dataset.", "### Personal and Sensitive Information\n\nSince the textual data is easily downloadable and the images were generated through an image generation model there shouldn't be any personal information in this dataset. Just in case you find something harmful or violating of one's personal information please let us know. We will take proper action as soon as possible.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is mainly aimed to release for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our precious contribution. This dataset is not only used for this purpose but also can as a dataset in image-to-text and text-to-image tasks.", "### Discussion of Biases\n\nThe dataset's possible biases would come from the DALL-E mini biases. It's actually a good practice to check the dataset entries in order to find biases in that model. One it's worth mentioning in this work is the DALL-E mini model's bias for the word 'iran' which nearly always comes up with images from political figures of this country.", "### Other Known Limitations\n\nThere are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since nowadays models are not perfect - and so do their output, it wouldn't be a good idea to use these artificially generated datasets as input to the new model. They suggest that by doing so we are actually limiting our accuracy by the model's accuracy which provided the primary dataset.", "## Additional Information", "### Dataset Curators\n\n+ Emad Fatemizadeh: The general idea for generating a graphical version of Farsi poems was firstly introduced by him.\n+ Sadra Sabouri: He looked up a translated version of the Shahnameh, extract and tokenized poems from it, and used the online DALL-E mini API to generate images from poems.\n+ Mahsa Namdar: The process of annotation as a post-process on data has been held by her.", "### Licensing Information\n\nMIT", "### Contributions\n\nThanks to @sadrasabouri for adding this dataset." ]
[ "TAGS\n#task_categories-image-to-text #task_categories-text-to-image #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us \n", "# ShahNegar (A Plotted version of The Shahnameh)\n\nThis dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka craiyon). You can use this dataset using the code below:", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Paper:\n- Point of Contact: Sadra Sabouri", "### Dataset Summary\n\nThis dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same 'id' field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. After all we reached to more than 30K, 256 * 256 images.", "### Supported Tasks and Leaderboards\n\nThe main purpose of making this dataset open source is because of its artistic value, but it can also be used for the below tasks:\n+ text-to-image\n+ image-to-text (image captioning)", "### Languages\n\nThe Shahnameh was generally written in Farsi (Persian) but the translated version we used for this dataset - satoor - was completely in English with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries which is the corresponding Farsi poem as soon as possible.", "## Dataset Structure", "### Data Fields\n\nHere is an instance of our dataset:\n\n\n+ 'image': the image for given text.\n+ 'id': the id for the text (Not for the image).\n+ 'text': the English text for the image.", "### Data Splits\n\nThis dataset has only a split ('train' split).", "## Dataset Creation\n\nThe translated version of the Shahnameh was generally derived from the satoor website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and give each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After a few annotations, we came up with more than 30000 images.", "### Annotations", "#### Annotation process\n\nThrough the process of image generation, we noticed a bias in the DALL-E models towards the word 'iran'. It was biased so that each sentence with this given word would have pictures from Iran's political figures which were usually totally irrelevant. The annotation process mainly focused to deal with these pictures. We removed those images which seems to be harmful to those figures and/or were irrelevant to the context.", "#### Who are the annotators?\n\nMahsa Namdar and Sadra Sabouri were the annotators of this dataset.", "### Personal and Sensitive Information\n\nSince the textual data is easily downloadable and the images were generated through an image generation model there shouldn't be any personal information in this dataset. Just in case you find something harmful or violating of one's personal information please let us know. We will take proper action as soon as possible.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is mainly aimed to release for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our precious contribution. This dataset is not only used for this purpose but also can as a dataset in image-to-text and text-to-image tasks.", "### Discussion of Biases\n\nThe dataset's possible biases would come from the DALL-E mini biases. It's actually a good practice to check the dataset entries in order to find biases in that model. One it's worth mentioning in this work is the DALL-E mini model's bias for the word 'iran' which nearly always comes up with images from political figures of this country.", "### Other Known Limitations\n\nThere are constant debates in the literature about the limitations of machine-generated datasets. Some believe that since nowadays models are not perfect - and so do their output, it wouldn't be a good idea to use these artificially generated datasets as input to the new model. They suggest that by doing so we are actually limiting our accuracy by the model's accuracy which provided the primary dataset.", "## Additional Information", "### Dataset Curators\n\n+ Emad Fatemizadeh: The general idea for generating a graphical version of Farsi poems was firstly introduced by him.\n+ Sadra Sabouri: He looked up a translated version of the Shahnameh, extract and tokenized poems from it, and used the online DALL-E mini API to generate images from poems.\n+ Mahsa Namdar: The process of annotation as a post-process on data has been held by her.", "### Licensing Information\n\nMIT", "### Contributions\n\nThanks to @sadrasabouri for adding this dataset." ]
[ 102, 74, 116, 17, 101, 56, 76, 6, 53, 19, 90, 5, 98, 29, 75, 8, 84, 98, 101, 5, 111, 7, 18 ]
[ "passage: TAGS\n#task_categories-image-to-text #task_categories-text-to-image #task_ids-image-captioning #annotations_creators-machine-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us \n# ShahNegar (A Plotted version of The Shahnameh)\n\nThis dataset is a plotted version of Ferdowsi's Shahnameh (which is a highly-regarded ancient set of Farsi poems) generated using DALL-E mini (aka craiyon). You can use this dataset using the code below:## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Paper:\n- Point of Contact: Sadra Sabouri### Dataset Summary\n\nThis dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph, we generated at most 9 images. Images corresponding to the same paragraphs have the same 'id' field. There was a human annotation post-process in which we removed some harmful/private generated images from the dataset. After all we reached to more than 30K, 256 * 256 images.### Supported Tasks and Leaderboards\n\nThe main purpose of making this dataset open source is because of its artistic value, but it can also be used for the below tasks:\n+ text-to-image\n+ image-to-text (image captioning)", "passage: ### Languages\n\nThe Shahnameh was generally written in Farsi (Persian) but the translated version we used for this dataset - satoor - was completely in English with no alignments for the corresponding Farsi poem. We are planning to add another field to dataset entries which is the corresponding Farsi poem as soon as possible.## Dataset Structure### Data Fields\n\nHere is an instance of our dataset:\n\n\n+ 'image': the image for given text.\n+ 'id': the id for the text (Not for the image).\n+ 'text': the English text for the image.### Data Splits\n\nThis dataset has only a split ('train' split).## Dataset Creation\n\nThe translated version of the Shahnameh was generally derived from the satoor website. We first extracted texts from the pdf. After that, we divided paragraphs into sentences and give each sentence to the DALL-E mini model through its online API. It generated nine images for each sentence. After a few annotations, we came up with more than 30000 images.### Annotations#### Annotation process\n\nThrough the process of image generation, we noticed a bias in the DALL-E models towards the word 'iran'. It was biased so that each sentence with this given word would have pictures from Iran's political figures which were usually totally irrelevant. The annotation process mainly focused to deal with these pictures. We removed those images which seems to be harmful to those figures and/or were irrelevant to the context.#### Who are the annotators?\n\nMahsa Namdar and Sadra Sabouri were the annotators of this dataset.### Personal and Sensitive Information\n\nSince the textual data is easily downloadable and the images were generated through an image generation model there shouldn't be any personal information in this dataset. Just in case you find something harmful or violating of one's personal information please let us know. We will take proper action as soon as possible.## Considerations for Using the Data### Social Impact of Dataset\n\nThis dataset is mainly aimed to release for its artistic value. The process of generating images for the Shahnameh - which is one of the most important Farsi poem books - is our precious contribution. This dataset is not only used for this purpose but also can as a dataset in image-to-text and text-to-image tasks." ]
471fb121de4f1806d7f0fd4dde685089c9cb2012
## Donut 🍩 : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit https://github.com/clovaai/donut ![image](https://github.com/clovaai/donut/blob/master/misc/sample_synthdog.png?raw=true) The links to the SynthDoG-generated datasets are here: - [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en): English, 0.5M. - [`synthdog-zh`](https://huggingface.co/datasets/naver-clova-ix/synthdog-zh): Chinese, 0.5M. - [`synthdog-ja`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ja): Japanese, 0.5M. - [`synthdog-ko`](https://huggingface.co/datasets/naver-clova-ix/synthdog-ko): Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see `./synthdog/README.md` and [our paper](#how-to-cite) for details. ## How to Cite If you find this work useful to you, please cite: ```bibtex @inproceedings{kim2022donut, title = {OCR-Free Document Understanding Transformer}, author = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun}, booktitle = {European Conference on Computer Vision (ECCV)}, year = {2022} } ```
naver-clova-ix/synthdog-en
[ "region:us" ]
2022-07-20T04:33:24+00:00
{}
2024-01-31T23:56:41+00:00
[]
[]
TAGS #region-us
## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets For more information, please visit URL !image The links to the SynthDoG-generated datasets are here: - 'synthdog-en': English, 0.5M. - 'synthdog-zh': Chinese, 0.5M. - 'synthdog-ja': Japanese, 0.5M. - 'synthdog-ko': Korean, 0.5M. To generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details. ## How to Cite If you find this work useful to you, please cite:
[ "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ "TAGS\n#region-us \n", "## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.", "## How to Cite\n\nIf you find this work useful to you, please cite:" ]
[ 6, 140, 17 ]
[ "passage: TAGS\n#region-us \n## Donut : OCR-Free Document Understanding Transformer (ECCV 2022) -- SynthDoG datasets\n\nFor more information, please visit URL\n\n!image\n\nThe links to the SynthDoG-generated datasets are here:\n\n- 'synthdog-en': English, 0.5M.\n- 'synthdog-zh': Chinese, 0.5M.\n- 'synthdog-ja': Japanese, 0.5M.\n- 'synthdog-ko': Korean, 0.5M.\n\nTo generate synthetic datasets with our SynthDoG, please see './synthdog/URL' and our paper for details.## How to Cite\n\nIf you find this work useful to you, please cite:" ]
a5057855c7aa264709b35de7bd85258d943bec22
This Urdu sentiment dataset was formed by concatenating the following two datasets: https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus https://www.kaggle.com/datasets/akkefa/imdb-dataset-of-50k-movie-translated-urdu-reviews
hassan4830/urdu-binary-classification-data
[ "license:afl-3.0", "region:us" ]
2022-07-20T04:56:40+00:00
{"license": "afl-3.0"}
2022-07-21T08:40:56+00:00
[]
[]
TAGS #license-afl-3.0 #region-us
This Urdu sentiment dataset was formed by concatenating the following two datasets: URL URL
[]
[ "TAGS\n#license-afl-3.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-afl-3.0 #region-us \n" ]
59519e655088aa83999037b3ba8fa88d77eb3b83
annotations_creators: [] language: - en language_creators: [] license: [] multilinguality: [] pretty_name: HuggingFace GitHub Issues size_categories: [] source_datasets: [] tags: [] task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification - document-retrieval
SakaiJun/github-issues
[ "region:us" ]
2022-07-20T06:23:42+00:00
{}
2022-07-20T06:37:59+00:00
[]
[]
TAGS #region-us
annotations_creators: [] language: - en language_creators: [] license: [] multilinguality: [] pretty_name: HuggingFace GitHub Issues size_categories: [] source_datasets: [] tags: [] task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification - document-retrieval
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
fd526b15b744502f4e24b21126f543d845a8c59e
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in english. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/fashion_mnist_quality_drift
[ "task_categories:image-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
2022-07-20T06:31:58+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"}
2022-10-25T09:40:17+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us
# Dataset Card for 'reviews_with_drift' ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place. ### Supported Tasks and Leaderboards 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in english. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fjcasti1 for adding this dataset.
[ "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us \n", "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
[ 95, 13, 125, 4, 120, 50, 12, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us \n# Dataset Card for 'reviews_with_drift'## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).### Languages\n\nText is mainly written in english.## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information" ]
8116d3b3bedf70dcc6f755e461f5ab499ef13e18
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-ilpost * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@dishant16](https://huggingface.co/dishant16) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-6cd6bf3a-11245505
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T06:44:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-ilpost", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-20T06:53:57+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-ilpost * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @dishant16 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-ilpost\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @dishant16 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-ilpost\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @dishant16 for evaluating this model." ]
[ 13, 87, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-ilpost\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @dishant16 for evaluating this model." ]
9e3c700a884eb823b3b6c9bd993f3197cdfdacb6
# Dataset Card for asvspoof2019 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336 - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/1911.01601 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasuers Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org) organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch in 2019. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances ``` {'speaker_id': 'LA_0091', 'audio_file_name': 'LA_T_8529430', 'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac', 'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204, 0.01339722, 0.01461792], dtype=float32), 'sampling_rate': 16000}, 'system_id': 'A01', 'key': 1} ``` ### Data Fields Logical access (LA): - `speaker_id`: `LA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `system_id`: ID of the speech spoofing system (A01 - A19), or, for bonafide speech SYSTEM-ID is left blank ('-') - `key`: 'bonafide' for genuine speech, or, 'spoof' for spoofing speech Physical access (PA): - `speaker_id`: `PA_****`, a 4-digit speaker ID - `audio_file_name`: name of the audio file - `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `environment_id`: a triplet (S,R,D_s), which take one letter in the set {a,b,c} as categorical value, defined as | | a | b | c | | -------------------------------- | ------ | ------- | -------- | | S: Room size (square meters) | 2-5 | 5-10 | 10-20 | | R: T60 (ms) | 50-200 | 200-600 | 600-1000 | | D_s: Talker-to-ASV distance (cm) | 10-50 | 50-100 | 100-150 | - `attack_id`: a duple (D_a,Q), which take one letter in the set {A,B,C} as categorical value, defined as | | A | B | C | | ----------------------------------- | ------- | ------ | ----- | | Z: Attacker-to-talker distance (cm) | 10-50 | 50-100 | > 100 | | Q: Replay device quality | perfect | high | low | for bonafide speech, `attack_id` is left blank ('-') - `key`: 'bonafide' for genuine speech, or, 'spoof' for spoofing speech ### Data Splits | | Training set | Development set | Evaluation set | | -------- | ------------ | --------------- | -------------- | | Bonafide | 2580 | 2548 | 7355 | | Spoof | 22800 | 22296 | 63882 | | Total | 25380 | 24844 | 71237 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/ ### Citation Information ``` @InProceedings{Todisco2019, Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection}, Author = {Todisco, Massimiliano and Wang, Xin and Sahidullah, Md and Delgado, H ́ector and Nautsch, Andreas and Yamagishi, Junichi and Evans, Nicholas and Kinnunen, Tomi and Lee, Kong Aik}, booktitle = {Proc. of Interspeech 2019}, Year = {2019} } ```
LanceaKing/asvspoof2019
[ "task_categories:audio-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|vctk", "language:en", "license:odc-by", "voice-anti-spoofing", "arxiv:1911.01601", "region:us" ]
2022-07-20T07:29:40+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|vctk"], "task_categories": ["audio-classification"], "task_ids": [], "pretty_name": "asvspoof2019", "tags": ["voice-anti-spoofing"]}
2022-11-11T08:41:54+00:00
[ "1911.01601" ]
[ "en" ]
TAGS #task_categories-audio-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|vctk #language-English #license-odc-by #voice-anti-spoofing #arxiv-1911.01601 #region-us
Dataset Card for asvspoof2019 ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasuers Challenge, for short, ASVspoof 2019 (URL) organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch in 2019. ### Supported Tasks and Leaderboards ### Languages English Dataset Structure ----------------- ### Data Instances ### Data Fields Logical access (LA): * 'speaker\_id': 'LA\_', a 4-digit speaker ID * 'audio\_file\_name': name of the audio file * 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * 'system\_id': ID of the speech spoofing system (A01 - A19), or, for bonafide speech SYSTEM-ID is left blank ('-') * 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech Physical access (PA): * 'speaker\_id': 'PA\_', a 4-digit speaker ID * 'audio\_file\_name': name of the audio file * 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * 'environment\_id': a triplet (S,R,D\_s), which take one letter in the set {a,b,c} as categorical value, defined as * 'attack\_id': a duple (D\_a,Q), which take one letter in the set {A,B,C} as categorical value, defined as for bonafide speech, 'attack\_id' is left blank ('-') * 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: URL
[ "### Dataset Summary\n\n\nThis is a database used for the Third Automatic Speaker Verification Spoofing\nand Countermeasuers Challenge, for short, ASVspoof 2019 (URL)\norganized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor\nDelgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,\nand Andreas Nautsch in 2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nLogical access (LA):\n\n\n* 'speaker\\_id': 'LA\\_', a 4-digit speaker ID\n* 'audio\\_file\\_name': name of the audio file\n* 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'system\\_id': ID of the speech spoofing system (A01 - A19), or, for bonafide speech SYSTEM-ID is left blank ('-')\n* 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech\n\n\nPhysical access (PA):\n\n\n* 'speaker\\_id': 'PA\\_', a 4-digit speaker ID\n* 'audio\\_file\\_name': name of the audio file\n* 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'environment\\_id': a triplet (S,R,D\\_s), which take one letter in the set {a,b,c} as categorical value, defined as\n* 'attack\\_id': a duple (D\\_a,Q), which take one letter in the set {A,B,C} as categorical value, defined as\n\n\n\nfor bonafide speech, 'attack\\_id' is left blank ('-')\n* 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: URL" ]
[ "TAGS\n#task_categories-audio-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|vctk #language-English #license-odc-by #voice-anti-spoofing #arxiv-1911.01601 #region-us \n", "### Dataset Summary\n\n\nThis is a database used for the Third Automatic Speaker Verification Spoofing\nand Countermeasuers Challenge, for short, ASVspoof 2019 (URL)\norganized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor\nDelgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,\nand Andreas Nautsch in 2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nLogical access (LA):\n\n\n* 'speaker\\_id': 'LA\\_', a 4-digit speaker ID\n* 'audio\\_file\\_name': name of the audio file\n* 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'system\\_id': ID of the speech spoofing system (A01 - A19), or, for bonafide speech SYSTEM-ID is left blank ('-')\n* 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech\n\n\nPhysical access (PA):\n\n\n* 'speaker\\_id': 'PA\\_', a 4-digit speaker ID\n* 'audio\\_file\\_name': name of the audio file\n* 'audio': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'environment\\_id': a triplet (S,R,D\\_s), which take one letter in the set {a,b,c} as categorical value, defined as\n* 'attack\\_id': a duple (D\\_a,Q), which take one letter in the set {A,B,C} as categorical value, defined as\n\n\n\nfor bonafide speech, 'attack\\_id' is left blank ('-')\n* 'key': 'bonafide' for genuine speech, or, 'spoof' for spoofing speech", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThis ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: URL" ]
[ 99, 94, 10, 12, 6, 650, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 25 ]
[ "passage: TAGS\n#task_categories-audio-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|vctk #language-English #license-odc-by #voice-anti-spoofing #arxiv-1911.01601 #region-us \n### Dataset Summary\n\n\nThis is a database used for the Third Automatic Speaker Verification Spoofing\nand Countermeasuers Challenge, for short, ASVspoof 2019 (URL)\norganized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor\nDelgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,\nand Andreas Nautsch in 2019.### Supported Tasks and Leaderboards### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances" ]
c0197df20a67b8ad636f63e4983e36208b3ea977
tokeron/Piyyut
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:heb", "license:afl-3.0", "metaphor-detection", "region:us" ]
2022-07-20T08:01:23+00:00
{"language": ["heb"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "tags": ["metaphor-detection"], "viewer": true}
2023-04-08T09:36:57+00:00
[]
[ "heb" ]
TAGS #task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-afl-3.0 #metaphor-detection #region-us
[]
[ "TAGS\n#task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-afl-3.0 #metaphor-detection #region-us \n" ]
[ 64 ]
[ "passage: TAGS\n#task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Hebrew #license-afl-3.0 #metaphor-detection #region-us \n" ]
468d0b8716ec40f521f557a4617039975a3a16e4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-5a29f55d-11295506
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T10:03:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion", "metrics": ["bertscore"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-20T10:04:02+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion * Dataset: emotion * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nickprock for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ 13, 102, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nickprock for evaluating this model." ]
bbb2a0157b760465002fd12a61af81b475cd387a
# Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - ** Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36) - **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf - **Leaderboard:** - **Point of Contact:** [Joel Niklaus](mailto:[email protected]) ### Dataset Summary The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the [MAPA project]( https://mapa-project.eu/) which foresees two annotation level, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification. ### Supported Tasks and Leaderboards The dataset supports the task of Named Entity Recognition and Classification (NERC). ### Languages The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv ## Dataset Structure ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping. ### Data Fields For the annotation the documents have been split into sentences. The annotations has been done on the token level. The files contain the following data fields - `language`: language of the sentence - `type`: The document type of the sentence. Currently, only EUR-LEX is supported. - `file_name`: The document file name the sentence belongs to. - `sentence_number`: The number of the sentence inside its document. - `tokens`: The list of tokens in the sentence. - `coarse_grained`: The coarse-grained annotations for each token - `fine_grained`: The fine-grained annotations for each token As previously stated, the annotation has been conducted on a global and a more fine-grained level. The tagset used for the global and the fine-grained named entities is the following: - Address - Building - City - Country - Place - Postcode - Street - Territory - Amount - Unit - Value - Date - Year - Standard Abbreviation - Month - Day of the Week - Day - Calender Event - Person - Age - Email - Ethnic Category - Family Name - Financial - Given Name – Female - Given Name – Male - Health Insurance Number - ID Document Number - Initial Name - Marital Status - Medical Record Number - Nationality - Profession - Role - Social Security Number - Title - Url - Organisation - Time - Vehicle - Build Year - Colour - License Plate Number - Model - Type The final coarse grained tagset (in IOB notation) is the following: `['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']` The final fine grained tagset (in IOB notation) is the following: `[ 'O', 'B-BUILDING', 'I-BUILDING', 'B-CITY', 'I-CITY', 'B-COUNTRY', 'I-COUNTRY', 'B-PLACE', 'I-PLACE', 'B-TERRITORY', 'I-TERRITORY', 'I-UNIT', 'B-UNIT', 'B-VALUE', 'I-VALUE', 'B-YEAR', 'I-YEAR', 'B-STANDARD ABBREVIATION', 'I-STANDARD ABBREVIATION', 'B-MONTH', 'I-MONTH', 'B-DAY', 'I-DAY', 'B-AGE', 'I-AGE', 'B-ETHNIC CATEGORY', 'I-ETHNIC CATEGORY', 'B-FAMILY NAME', 'I-FAMILY NAME', 'B-INITIAL NAME', 'I-INITIAL NAME', 'B-MARITAL STATUS', 'I-MARITAL STATUS', 'B-PROFESSION', 'I-PROFESSION', 'B-ROLE', 'I-ROLE', 'B-NATIONALITY', 'I-NATIONALITY', 'B-TITLE', 'I-TITLE', 'B-URL', 'I-URL', 'B-TYPE', 'I-TYPE', ]` ### Data Splits Splits created by Joel Niklaus. | language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences | |:-----------|----------------:|---------------------:|---------------:|--------------------:|-------------------------:|-------------------:| | bg | 9 | 1 | 2 | 1411 | 166 | 560 | | cs | 9 | 1 | 2 | 1464 | 176 | 563 | | da | 9 | 1 | 2 | 1455 | 164 | 550 | | de | 9 | 1 | 2 | 1457 | 166 | 558 | | el | 9 | 1 | 2 | 1529 | 174 | 584 | | en | 9 | 1 | 2 | 893 | 98 | 408 | | es | 7 | 1 | 1 | 806 | 248 | 155 | | et | 9 | 1 | 2 | 1391 | 163 | 516 | | fi | 9 | 1 | 2 | 1398 | 187 | 531 | | fr | 9 | 1 | 2 | 1297 | 97 | 490 | | ga | 9 | 1 | 2 | 1383 | 165 | 515 | | hu | 9 | 1 | 2 | 1390 | 171 | 525 | | it | 9 | 1 | 2 | 1411 | 162 | 550 | | lt | 9 | 1 | 2 | 1413 | 173 | 548 | | lv | 9 | 1 | 2 | 1383 | 167 | 553 | | mt | 9 | 1 | 2 | 937 | 93 | 442 | | nl | 9 | 1 | 2 | 1391 | 164 | 530 | | pt | 9 | 1 | 2 | 1086 | 105 | 390 | | ro | 9 | 1 | 2 | 1480 | 175 | 557 | | sk | 9 | 1 | 2 | 1395 | 165 | 526 | | sv | 9 | 1 | 2 | 1453 | 175 | 539 | ## Dataset Creation ### Curation Rationale *„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classificatio] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022) ### Source Data #### Initial Data Collection and Normalization The dataset consists of documents taken from EUR-Lex corpus which is publicly available. No further information on the data collection process are given in de Gibert Bonet et al. (2022). #### Who are the source language producers? The source language producers are presumably lawyers. ### Annotations #### Annotation process *"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022) #### Who are the annotators? Only one annotator conducted the annotation. More information are not provdided in de Gibert Bonet et al. (2022). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing only the annotated documents from the EUR-Lex corpus were available. Note that the information given in this dataset card refer to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. ## Additional Information ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]) ; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]) ; [Github](https://github.com/kapllan)). ### Licensing Information [Attribution 4.0 International (CC BY 4.0) ](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @article{DeGibertBonet2022, author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite}, journal = {Proceedings of the Language Resources and Evaluation Conference}, number = {June}, pages = {3751--3760}, title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}}, url = {https://aclanthology.org/2022.lrec-1.400}, year = {2022} } ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
joelniklaus/mapa
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:multilingual", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:ga", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pt", "language:ro", "language:sk", "language:sv", "license:cc-by-4.0", "named-entity-recognition-and-classification", "region:us" ]
2022-07-20T11:14:50+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["multilingual", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hu", "it", "lt", "lv", "mt", "nl", "pt", "ro", "sk", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Spanish Datasets for Sensitive Entity Detection in the Legal Domain", "tags": ["named-entity-recognition-and-classification"]}
2022-10-25T15:17:09+00:00
[]
[ "multilingual", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hu", "it", "lt", "lv", "mt", "nl", "pt", "ro", "sk", "sv" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-multilingual #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Portuguese #language-Romanian #language-Slovak #language-Swedish #license-cc-by-4.0 #named-entity-recognition-and-classification #region-us
Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain ================================================================================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: Spanish, Most, German, Portuguese, Slovak, Slovenian, Swedish * Paper: de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. URL * Leaderboard: * Point of Contact: Joel Niklaus ### Dataset Summary The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the MAPA project which foresees two annotation level, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification. ### Supported Tasks and Leaderboards The dataset supports the task of Named Entity Recognition and Classification (NERC). ### Languages The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv Dataset Structure ----------------- ### Data Instances The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping. ### Data Fields For the annotation the documents have been split into sentences. The annotations has been done on the token level. The files contain the following data fields * 'language': language of the sentence * 'type': The document type of the sentence. Currently, only EUR-LEX is supported. * 'file\_name': The document file name the sentence belongs to. * 'sentence\_number': The number of the sentence inside its document. * 'tokens': The list of tokens in the sentence. * 'coarse\_grained': The coarse-grained annotations for each token * 'fine\_grained': The fine-grained annotations for each token As previously stated, the annotation has been conducted on a global and a more fine-grained level. The tagset used for the global and the fine-grained named entities is the following: * Address + Building + City + Country + Place + Postcode + Street + Territory * Amount + Unit + Value * Date + Year + Standard Abbreviation + Month + Day of the Week + Day + Calender Event * Person + Age + Email + Ethnic Category + Family Name + Financial + Given Name – Female + Given Name – Male + Health Insurance Number + ID Document Number + Initial Name + Marital Status + Medical Record Number + Nationality + Profession + Role + Social Security Number + Title + Url * Organisation * Time * Vehicle + Build Year + Colour + License Plate Number + Model + Type The final coarse grained tagset (in IOB notation) is the following: '['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']' The final fine grained tagset (in IOB notation) is the following: '[ 'O', 'B-BUILDING', 'I-BUILDING', 'B-CITY', 'I-CITY', 'B-COUNTRY', 'I-COUNTRY', 'B-PLACE', 'I-PLACE', 'B-TERRITORY', 'I-TERRITORY', 'I-UNIT', 'B-UNIT', 'B-VALUE', 'I-VALUE', 'B-YEAR', 'I-YEAR', 'B-STANDARD ABBREVIATION', 'I-STANDARD ABBREVIATION', 'B-MONTH', 'I-MONTH', 'B-DAY', 'I-DAY', 'B-AGE', 'I-AGE', 'B-ETHNIC CATEGORY', 'I-ETHNIC CATEGORY', 'B-FAMILY NAME', 'I-FAMILY NAME', 'B-INITIAL NAME', 'I-INITIAL NAME', 'B-MARITAL STATUS', 'I-MARITAL STATUS', 'B-PROFESSION', 'I-PROFESSION', 'B-ROLE', 'I-ROLE', 'B-NATIONALITY', 'I-NATIONALITY', 'B-TITLE', 'I-TITLE', 'B-URL', 'I-URL', 'B-TYPE', 'I-TYPE', ]' ### Data Splits Splits created by Joel Niklaus. Dataset Creation ---------------- ### Curation Rationale *„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classificatio] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022) ### Source Data #### Initial Data Collection and Normalization The dataset consists of documents taken from EUR-Lex corpus which is publicly available. No further information on the data collection process are given in de Gibert Bonet et al. (2022). #### Who are the source language producers? The source language producers are presumably lawyers. ### Annotations #### Annotation process *"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022) #### Who are the annotators? Only one annotator conducted the annotation. More information are not provdided in de Gibert Bonet et al. (2022). ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing only the annotated documents from the EUR-Lex corpus were available. Note that the information given in this dataset card refer to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to have a look at the conversion script in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card. Additional Information ---------------------- ### Dataset Curators The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus (Email ; Github) and Veton Matoshi (Email ; Github). ### Licensing Information Attribution 4.0 International (CC BY 4.0) ### Contributions Thanks to @JoelNiklaus and @kapllan for adding this dataset.
[ "### Dataset Summary\n\n\nThe dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court\ndecisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated\nfor named entities following the guidelines of the MAPA project which foresees two\nannotation level, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the task of Named Entity Recognition and Classification (NERC).", "### Languages\n\n\nThe following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are\nnon-overlapping.", "### Data Fields\n\n\nFor the annotation the documents have been split into sentences. The annotations has been done on the token level.\nThe files contain the following data fields\n\n\n* 'language': language of the sentence\n* 'type': The document type of the sentence. Currently, only EUR-LEX is supported.\n* 'file\\_name': The document file name the sentence belongs to.\n* 'sentence\\_number': The number of the sentence inside its document.\n* 'tokens': The list of tokens in the sentence.\n* 'coarse\\_grained': The coarse-grained annotations for each token\n* 'fine\\_grained': The fine-grained annotations for each token\n\n\nAs previously stated, the annotation has been conducted on a global and a more fine-grained level.\n\n\nThe tagset used for the global and the fine-grained named entities is the following:\n\n\n* Address\n\t+ Building\n\t+ City\n\t+ Country\n\t+ Place\n\t+ Postcode\n\t+ Street\n\t+ Territory\n* Amount\n\t+ Unit\n\t+ Value\n* Date\n\t+ Year\n\t+ Standard Abbreviation\n\t+ Month\n\t+ Day of the Week\n\t+ Day\n\t+ Calender Event\n* Person\n\t+ Age\n\t+ Email\n\t+ Ethnic Category\n\t+ Family Name\n\t+ Financial\n\t+ Given Name – Female\n\t+ Given Name – Male\n\t+ Health Insurance Number\n\t+ ID Document Number\n\t+ Initial Name\n\t+ Marital Status\n\t+ Medical Record Number\n\t+ Nationality\n\t+ Profession\n\t+ Role\n\t+ Social Security Number\n\t+ Title\n\t+ Url\n* Organisation\n* Time\n* Vehicle\n\t+ Build Year\n\t+ Colour\n\t+ License Plate Number\n\t+ Model\n\t+ Type\n\n\nThe final coarse grained tagset (in IOB notation) is the following:\n\n\n'['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']'\n\n\nThe final fine grained tagset (in IOB notation) is the following:\n\n\n'[\n'O',\n'B-BUILDING',\n'I-BUILDING',\n'B-CITY',\n'I-CITY',\n'B-COUNTRY',\n'I-COUNTRY',\n'B-PLACE',\n'I-PLACE',\n'B-TERRITORY',\n'I-TERRITORY',\n'I-UNIT',\n'B-UNIT',\n'B-VALUE',\n'I-VALUE',\n'B-YEAR',\n'I-YEAR',\n'B-STANDARD ABBREVIATION',\n'I-STANDARD ABBREVIATION',\n'B-MONTH',\n'I-MONTH',\n'B-DAY',\n'I-DAY',\n'B-AGE',\n'I-AGE',\n'B-ETHNIC CATEGORY',\n'I-ETHNIC CATEGORY',\n'B-FAMILY NAME',\n'I-FAMILY NAME',\n'B-INITIAL NAME',\n'I-INITIAL NAME',\n'B-MARITAL STATUS',\n'I-MARITAL STATUS',\n'B-PROFESSION',\n'I-PROFESSION',\n'B-ROLE',\n'I-ROLE',\n'B-NATIONALITY',\n'I-NATIONALITY',\n'B-TITLE',\n'I-TITLE',\n'B-URL',\n'I-URL',\n'B-TYPE',\n'I-TYPE',\n]'", "### Data Splits\n\n\nSplits created by Joel Niklaus.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\n*„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classificatio] in Spanish in the legal domain. With the\npresent contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and\nevaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted\nanonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset consists of documents taken from EUR-Lex corpus which is publicly available. No further\ninformation on the data collection process are given in de Gibert Bonet et al. (2022).", "#### Who are the source language producers?\n\n\nThe source language producers are presumably lawyers.", "### Annotations", "#### Annotation process\n\n\n*\"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme\ndescribed in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...)\nand level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex,\nCPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using\nINCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium.\"* (de Gibert\nBonet et al., 2022)", "#### Who are the annotators?\n\n\nOnly one annotator conducted the annotation. More information are not provdided in de Gibert Bonet et al. (2022).", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nNote that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al.\n(2022). At the time of writing only the annotated documents from the EUR-Lex corpus were available.\n\n\nNote that the information given in this dataset card refer to the dataset version as provided by Joel Niklaus and Veton\nMatoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset\nconsisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the\ndataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that,\ndifferences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to\nhave a look at the conversion script in order to retrace the steps for converting the\noriginal dataset into the present jsonl-format. For further information on the original dataset structure, we refer to\nthe bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation\nInformation*. Additional changes were made by Joel Niklaus (Email\n; Github) and Veton Matoshi (Email\n; Github).", "### Licensing Information\n\n\nAttribution 4.0 International (CC BY 4.0)", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this\ndataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-multilingual #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Portuguese #language-Romanian #language-Slovak #language-Swedish #license-cc-by-4.0 #named-entity-recognition-and-classification #region-us \n", "### Dataset Summary\n\n\nThe dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court\ndecisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated\nfor named entities following the guidelines of the MAPA project which foresees two\nannotation level, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the task of Named Entity Recognition and Classification (NERC).", "### Languages\n\n\nThe following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are\nnon-overlapping.", "### Data Fields\n\n\nFor the annotation the documents have been split into sentences. The annotations has been done on the token level.\nThe files contain the following data fields\n\n\n* 'language': language of the sentence\n* 'type': The document type of the sentence. Currently, only EUR-LEX is supported.\n* 'file\\_name': The document file name the sentence belongs to.\n* 'sentence\\_number': The number of the sentence inside its document.\n* 'tokens': The list of tokens in the sentence.\n* 'coarse\\_grained': The coarse-grained annotations for each token\n* 'fine\\_grained': The fine-grained annotations for each token\n\n\nAs previously stated, the annotation has been conducted on a global and a more fine-grained level.\n\n\nThe tagset used for the global and the fine-grained named entities is the following:\n\n\n* Address\n\t+ Building\n\t+ City\n\t+ Country\n\t+ Place\n\t+ Postcode\n\t+ Street\n\t+ Territory\n* Amount\n\t+ Unit\n\t+ Value\n* Date\n\t+ Year\n\t+ Standard Abbreviation\n\t+ Month\n\t+ Day of the Week\n\t+ Day\n\t+ Calender Event\n* Person\n\t+ Age\n\t+ Email\n\t+ Ethnic Category\n\t+ Family Name\n\t+ Financial\n\t+ Given Name – Female\n\t+ Given Name – Male\n\t+ Health Insurance Number\n\t+ ID Document Number\n\t+ Initial Name\n\t+ Marital Status\n\t+ Medical Record Number\n\t+ Nationality\n\t+ Profession\n\t+ Role\n\t+ Social Security Number\n\t+ Title\n\t+ Url\n* Organisation\n* Time\n* Vehicle\n\t+ Build Year\n\t+ Colour\n\t+ License Plate Number\n\t+ Model\n\t+ Type\n\n\nThe final coarse grained tagset (in IOB notation) is the following:\n\n\n'['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']'\n\n\nThe final fine grained tagset (in IOB notation) is the following:\n\n\n'[\n'O',\n'B-BUILDING',\n'I-BUILDING',\n'B-CITY',\n'I-CITY',\n'B-COUNTRY',\n'I-COUNTRY',\n'B-PLACE',\n'I-PLACE',\n'B-TERRITORY',\n'I-TERRITORY',\n'I-UNIT',\n'B-UNIT',\n'B-VALUE',\n'I-VALUE',\n'B-YEAR',\n'I-YEAR',\n'B-STANDARD ABBREVIATION',\n'I-STANDARD ABBREVIATION',\n'B-MONTH',\n'I-MONTH',\n'B-DAY',\n'I-DAY',\n'B-AGE',\n'I-AGE',\n'B-ETHNIC CATEGORY',\n'I-ETHNIC CATEGORY',\n'B-FAMILY NAME',\n'I-FAMILY NAME',\n'B-INITIAL NAME',\n'I-INITIAL NAME',\n'B-MARITAL STATUS',\n'I-MARITAL STATUS',\n'B-PROFESSION',\n'I-PROFESSION',\n'B-ROLE',\n'I-ROLE',\n'B-NATIONALITY',\n'I-NATIONALITY',\n'B-TITLE',\n'I-TITLE',\n'B-URL',\n'I-URL',\n'B-TYPE',\n'I-TYPE',\n]'", "### Data Splits\n\n\nSplits created by Joel Niklaus.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\n*„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classificatio] in Spanish in the legal domain. With the\npresent contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and\nevaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted\nanonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset consists of documents taken from EUR-Lex corpus which is publicly available. No further\ninformation on the data collection process are given in de Gibert Bonet et al. (2022).", "#### Who are the source language producers?\n\n\nThe source language producers are presumably lawyers.", "### Annotations", "#### Annotation process\n\n\n*\"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme\ndescribed in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...)\nand level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex,\nCPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using\nINCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium.\"* (de Gibert\nBonet et al., 2022)", "#### Who are the annotators?\n\n\nOnly one annotator conducted the annotation. More information are not provdided in de Gibert Bonet et al. (2022).", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nNote that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al.\n(2022). At the time of writing only the annotated documents from the EUR-Lex corpus were available.\n\n\nNote that the information given in this dataset card refer to the dataset version as provided by Joel Niklaus and Veton\nMatoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset\nconsisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the\ndataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that,\ndifferences with regard to dataset statistics as give in the respective papers can be expected. The reader is advised to\nhave a look at the conversion script in order to retrace the steps for converting the\noriginal dataset into the present jsonl-format. For further information on the original dataset structure, we refer to\nthe bibliographical references and the original Github repositories and/or web pages provided in this dataset card.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe names of the original dataset curators and creators can be found in references given below, in the section *Citation\nInformation*. Additional changes were made by Joel Niklaus (Email\n; Github) and Veton Matoshi (Email\n; Github).", "### Licensing Information\n\n\nAttribution 4.0 International (CC BY 4.0)", "### Contributions\n\n\nThanks to @JoelNiklaus and @kapllan for adding this\ndataset." ]
[ 229, 112, 32, 65, 43, 841, 19, 114, 4, 52, 22, 5, 167, 39, 18, 7, 8, 260, 65, 14, 22 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-multilingual #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Portuguese #language-Romanian #language-Slovak #language-Swedish #license-cc-by-4.0 #named-entity-recognition-and-classification #region-us \n### Dataset Summary\n\n\nThe dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court\ndecisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated\nfor named entities following the guidelines of the MAPA project which foresees two\nannotation level, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.### Supported Tasks and Leaderboards\n\n\nThe dataset supports the task of Named Entity Recognition and Classification (NERC).### Languages\n\n\nThe following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are\nnon-overlapping.", "passage: ### Data Fields\n\n\nFor the annotation the documents have been split into sentences. The annotations has been done on the token level.\nThe files contain the following data fields\n\n\n* 'language': language of the sentence\n* 'type': The document type of the sentence. Currently, only EUR-LEX is supported.\n* 'file\\_name': The document file name the sentence belongs to.\n* 'sentence\\_number': The number of the sentence inside its document.\n* 'tokens': The list of tokens in the sentence.\n* 'coarse\\_grained': The coarse-grained annotations for each token\n* 'fine\\_grained': The fine-grained annotations for each token\n\n\nAs previously stated, the annotation has been conducted on a global and a more fine-grained level.\n\n\nThe tagset used for the global and the fine-grained named entities is the following:\n\n\n* Address\n\t+ Building\n\t+ City\n\t+ Country\n\t+ Place\n\t+ Postcode\n\t+ Street\n\t+ Territory\n* Amount\n\t+ Unit\n\t+ Value\n* Date\n\t+ Year\n\t+ Standard Abbreviation\n\t+ Month\n\t+ Day of the Week\n\t+ Day\n\t+ Calender Event\n* Person\n\t+ Age\n\t+ Email\n\t+ Ethnic Category\n\t+ Family Name\n\t+ Financial\n\t+ Given Name – Female\n\t+ Given Name – Male\n\t+ Health Insurance Number\n\t+ ID Document Number\n\t+ Initial Name\n\t+ Marital Status\n\t+ Medical Record Number\n\t+ Nationality\n\t+ Profession\n\t+ Role\n\t+ Social Security Number\n\t+ Title\n\t+ Url\n* Organisation\n* Time\n* Vehicle\n\t+ Build Year\n\t+ Colour\n\t+ License Plate Number\n\t+ Model\n\t+ Type\n\n\nThe final coarse grained tagset (in IOB notation) is the following:\n\n\n'['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']'\n\n\nThe final fine grained tagset (in IOB notation) is the following:\n\n\n'[\n'O',\n'B-BUILDING',\n'I-BUILDING',\n'B-CITY',\n'I-CITY',\n'B-COUNTRY',\n'I-COUNTRY',\n'B-PLACE',\n'I-PLACE',\n'B-TERRITORY',\n'I-TERRITORY',\n'I-UNIT',\n'B-UNIT',\n'B-VALUE',\n'I-VALUE',\n'B-YEAR',\n'I-YEAR',\n'B-STANDARD ABBREVIATION',\n'I-STANDARD ABBREVIATION',\n'B-MONTH',\n'I-MONTH',\n'B-DAY',\n'I-DAY',\n'B-AGE',\n'I-AGE',\n'B-ETHNIC CATEGORY',\n'I-ETHNIC CATEGORY',\n'B-FAMILY NAME',\n'I-FAMILY NAME',\n'B-INITIAL NAME',\n'I-INITIAL NAME',\n'B-MARITAL STATUS',\n'I-MARITAL STATUS',\n'B-PROFESSION',\n'I-PROFESSION',\n'B-ROLE',\n'I-ROLE',\n'B-NATIONALITY',\n'I-NATIONALITY',\n'B-TITLE',\n'I-TITLE',\n'B-URL',\n'I-URL',\n'B-TYPE',\n'I-TYPE',\n]'### Data Splits\n\n\nSplits created by Joel Niklaus.\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\n*„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classificatio] in Spanish in the legal domain. With the\npresent contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and\nevaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted\nanonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)### Source Data#### Initial Data Collection and Normalization\n\n\nThe dataset consists of documents taken from EUR-Lex corpus which is publicly available. No further\ninformation on the data collection process are given in de Gibert Bonet et al. (2022).#### Who are the source language producers?\n\n\nThe source language producers are presumably lawyers.### Annotations#### Annotation process\n\n\n*\"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme\ndescribed in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...)\nand level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex,\nCPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using\nINCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium.\"* (de Gibert\nBonet et al., 2022)#### Who are the annotators?\n\n\nOnly one annotator conducted the annotation. More information are not provdided in de Gibert Bonet et al. (2022).### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases" ]
e4d8ebdbd6644c78caac2655731820a7e07fd298
## advABSA An adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed [*adv*ABSA](https://arxiv.org/pdf/2207.08099.pdf) for both aspect-based sentiment classification (SC) and opinion extraction (OE). ### *adv*ABSA (Adversarial ABSA Benchmark) In response to the concerning robustness issue of ABSA, [Arts](https://aclanthology.org/2020.emnlp-main.292.pdf) is proposed, which contains datasets crafted only for adversarial evaluaiton on SC but not for OE. So we additionally craft datasets for adversarial evaluaiton on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed to two parts, where the first part is Arts-\[domain\]-SC reused from Arts and the second part is Arts-\[domain\]-OE newly produced by us. ### *std*ABSA (Standard ABSA Benchmark) In addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\[domain\]-SC and Sem14-\[domain\]-OE. So corresponding performance drops can be measured properly. ### Citation If you find *adv*ABSA useful, please kindly star this repositary and cite our paper as follows: ```bibtex @inproceedings{ma-etal-2022-aspect, title = "Aspect-specific Context Modeling for Aspect-based Sentiment Analysis", author = "Ma, Fang and Zhang, Chen and Zhang, Bo and Song, Dawei", booktitle = "NLPCC", month = "sep", year = "2022", address = "Guilin, China", url = "https://arxiv.org/pdf/2207.08099.pdf", } ``` ### Credits The benchmark is mainly processed by [Fang Ma](https://github.com/BD-MF).
becurrio/advABSA
[ "license:apache-2.0", "arxiv:2207.08099", "region:us" ]
2022-07-20T11:24:25+00:00
{"license": "apache-2.0"}
2022-07-21T04:57:48+00:00
[ "2207.08099" ]
[]
TAGS #license-apache-2.0 #arxiv-2207.08099 #region-us
## advABSA An adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed *adv*ABSA for both aspect-based sentiment classification (SC) and opinion extraction (OE). ### *adv*ABSA (Adversarial ABSA Benchmark) In response to the concerning robustness issue of ABSA, Arts is proposed, which contains datasets crafted only for adversarial evaluaiton on SC but not for OE. So we additionally craft datasets for adversarial evaluaiton on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed to two parts, where the first part is Arts-\[domain\]-SC reused from Arts and the second part is Arts-\[domain\]-OE newly produced by us. ### *std*ABSA (Standard ABSA Benchmark) In addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\[domain\]-SC and Sem14-\[domain\]-OE. So corresponding performance drops can be measured properly. If you find *adv*ABSA useful, please kindly star this repositary and cite our paper as follows: ### Credits The benchmark is mainly processed by Fang Ma.
[ "## advABSA\n\nAn adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed *adv*ABSA for both aspect-based sentiment classification (SC) and opinion extraction (OE).", "### *adv*ABSA (Adversarial ABSA Benchmark)\n\nIn response to the concerning robustness issue of ABSA, Arts is proposed, which contains datasets crafted only for adversarial evaluaiton on SC but not for OE. So we additionally craft datasets for adversarial evaluaiton on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed to two parts, where the first part is Arts-\\[domain\\]-SC reused from Arts and the second part is Arts-\\[domain\\]-OE newly produced by us.", "### *std*ABSA (Standard ABSA Benchmark)\n\nIn addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\\[domain\\]-SC and Sem14-\\[domain\\]-OE. So corresponding performance drops can be measured properly.\n\nIf you find *adv*ABSA useful, please kindly star this repositary and cite our paper as follows:", "### Credits\n\nThe benchmark is mainly processed by Fang Ma." ]
[ "TAGS\n#license-apache-2.0 #arxiv-2207.08099 #region-us \n", "## advABSA\n\nAn adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed *adv*ABSA for both aspect-based sentiment classification (SC) and opinion extraction (OE).", "### *adv*ABSA (Adversarial ABSA Benchmark)\n\nIn response to the concerning robustness issue of ABSA, Arts is proposed, which contains datasets crafted only for adversarial evaluaiton on SC but not for OE. So we additionally craft datasets for adversarial evaluaiton on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed to two parts, where the first part is Arts-\\[domain\\]-SC reused from Arts and the second part is Arts-\\[domain\\]-OE newly produced by us.", "### *std*ABSA (Standard ABSA Benchmark)\n\nIn addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\\[domain\\]-SC and Sem14-\\[domain\\]-OE. So corresponding performance drops can be measured properly.\n\nIf you find *adv*ABSA useful, please kindly star this repositary and cite our paper as follows:", "### Credits\n\nThe benchmark is mainly processed by Fang Ma." ]
[ 23, 46, 156, 108, 16 ]
[ "passage: TAGS\n#license-apache-2.0 #arxiv-2207.08099 #region-us \n## advABSA\n\nAn adversarial aspect-based sentiment analysis (ABSA) benchmark, dubbed *adv*ABSA for both aspect-based sentiment classification (SC) and opinion extraction (OE).### *adv*ABSA (Adversarial ABSA Benchmark)\n\nIn response to the concerning robustness issue of ABSA, Arts is proposed, which contains datasets crafted only for adversarial evaluaiton on SC but not for OE. So we additionally craft datasets for adversarial evaluaiton on OE following their track. These gathered datasets form *adv*ABSA. That is, *adv*ABSA can be decomposed to two parts, where the first part is Arts-\\[domain\\]-SC reused from Arts and the second part is Arts-\\[domain\\]-OE newly produced by us.### *std*ABSA (Standard ABSA Benchmark)\n\nIn addition, we also provide *std*ABSA containing datasets from SemEval14 for standard evaluation, namely Sem14-\\[domain\\]-SC and Sem14-\\[domain\\]-OE. So corresponding performance drops can be measured properly.\n\nIf you find *adv*ABSA useful, please kindly star this repositary and cite our paper as follows:### Credits\n\nThe benchmark is mainly processed by Fang Ma." ]
88226971c2c3968d9bcef3eea281995c0313f108
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8015d52c-11325509
[ "autotrain", "evaluation", "region:us" ]
2022-07-20T15:03:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "tuner007/pegasus_summarizer", "metrics": ["accuracy"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-20T16:31:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Neez for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ 13, 86, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Neez for evaluating this model." ]
9862d1e870fe6dba4922d3d326c9c8b90a2ecad5
# Dataset Card for "relbert/lexical_relation_classification" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://aclanthology.org/P19-1169/](https://aclanthology.org/P19-1169/) - **Dataset:** Lexical Relation Classification ### Dataset Summary Five different datasets (`BLESS`, `CogALexV`, `EVALution`, `K&H+N`, `ROOT09`) for lexical relation classification used in [SphereRE](https://www.aclweb.org/anthology/P19-1169/). ### Dataset Summary This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/). | name | train | validation | test | |---------------|------:|-------:|-----:| | `BLESS` | 18582 | 1327 | 6637 | | `CogALexV` | 3054 | - | 4260 | | `EVALution` | 5160 | 372 | 1846 | | `K&H+N` | 40256 | 2876 | 14377 | | `ROOT09` | 8933 | 638 | 3191 | ## Dataset Structure ### Data Instances An example looks as follows. ``` {"head": "turtle", "tail": "live", "relation": "event"} ``` The `stem` and `tail` are the word pair and `relation` is the corresponding relation label. ### Citation Information ``` @inproceedings{wang-etal-2019-spherere, title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings", author = "Wang, Chengyu and He, Xiaofeng and Zhou, Aoying", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1169", doi = "10.18653/v1/P19-1169", pages = "1727--1737", abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.", } ``` ### LICENSE The LICENSE of all the resources are under [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
relbert/lexical_relation_classification
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
2022-07-20T21:45:48+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "Lexical Relation Classification"}
2022-07-20T22:24:17+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us
Dataset Card for "relbert/lexical\_relation\_classification" ============================================================ Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: Lexical Relation Classification ### Dataset Summary Five different datasets ('BLESS', 'CogALexV', 'EVALution', 'K&H+N', 'ROOT09') for lexical relation classification used in SphereRE. ### Dataset Summary This dataset contains 5 different word analogy questions used in Analogy Language Model. Dataset Structure ----------------- ### Data Instances An example looks as follows. The 'stem' and 'tail' are the word pair and 'relation' is the corresponding relation label. ### LICENSE The LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
[ "### Dataset Summary\n\n\nFive different datasets ('BLESS', 'CogALexV', 'EVALution', 'K&H+N', 'ROOT09') for lexical relation classification used in SphereRE.", "### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.\n\n\nThe 'stem' and 'tail' are the word pair and 'relation' is the corresponding relation label.", "### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
[ "TAGS\n#multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nFive different datasets ('BLESS', 'CogALexV', 'EVALution', 'K&H+N', 'ROOT09') for lexical relation classification used in SphereRE.", "### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.\n\n\nThe 'stem' and 'tail' are the word pair and 'relation' is the corresponding relation label.", "### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
[ 33, 57, 32, 37, 45 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us \n### Dataset Summary\n\n\nFive different datasets ('BLESS', 'CogALexV', 'EVALution', 'K&H+N', 'ROOT09') for lexical relation classification used in SphereRE.### Dataset Summary\n\n\nThis dataset contains 5 different word analogy questions used in Analogy Language Model.\n\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example looks as follows.\n\n\nThe 'stem' and 'tail' are the word pair and 'relation' is the corresponding relation label.### LICENSE\n\n\nThe LICENSE of all the resources are under CC-BY-NC-4.0. Thus, they are freely available for academic purpose or individual research, but restricted for commercial use." ]
517e8e60404a2e2961bf28e0fd3631cd8424e81d
# Dataset Card for "relbert/relation_mapping" ## Dataset Description - **Repository:** [RelBERT](https://github.com/asahi417/relbert) - **Paper:** [https://www.jair.org/index.php/jair/article/view/10583](https://www.jair.org/index.php/jair/article/view/10583) - **Dataset:** Relation Mapping ### Dataset Summary Relation Mapping is a task to choose optimal combination of word pairs (see more detail in the [paper](https://www.jair.org/index.php/jair/article/view/10583)). Relation mapping `M` is the set of bijective map in between two sets of terms (`A` and `B`): ``` [set `A`]: ("solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity") [set `B`]: ("atom", "nucleus", "electron", "charge", "attracts", "revolves", "electromagnetism") [Relation Mapping `M`] * "solar system" -> "atom" * "sun" -> "nucleus" * "planet" -> "electron" * "mass" -> "charge" * "attracts" -> "attracts" * "revolves" -> "revolves" * "gravity" -> "electromagnetism" ``` ***[Relation Mapping Problem](https://www.jair.org/index.php/jair/article/view/10583)*** is the task to identify the mapping `M` given the sets of terms `A` and `B`. ## Dataset Structure ### Data Instances An example looks as follows. ``` { "id": "m10", "reference": ["seeing", "understanding"], "source": ["seeing", "light", "illuminating", "darkness", "view", "hidden"], "target": ["understanding", "knowledge", "explaining", "confusion", "interpretation", "secret"], "agreement": [68.2, 77.3, 86.4, 86.4, 68.2, 86.4], "pos": ["vbg", "nn", "vbg", "nn", "nn", "jj"], "target_random": ["knowledge", "interpretation", "explaining", "confusion", "understanding", "secret"] } ``` - `source`: A list of terms, which is the source of the relation mapping from. - `target_random`: A list of terms, where we want to find a mapping from `source` to. - `target`: A correctly ordered `target_random` that aligns with the `source`. Given `source` and `target_random`, the task is to predict the correct order of `target_random` so that it matches `target`. In average 7 terms are in the set, so the total number of possible order is 5040. ### Data Splits | name |test| |---------|----:| |relation_mapping| 20 | ### Citation Information ``` @article{turney2008latent, title={The latent relation mapping engine: Algorithm and experiments}, author={Turney, Peter D}, journal={Journal of Artificial Intelligence Research}, volume={33}, pages={615--655}, year={2008} } ```
relbert/relation_mapping
[ "multilinguality:monolingual", "size_categories:1<n<1K", "language:en", "license:other", "region:us" ]
2022-07-20T21:46:33+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1<n<1K"], "pretty_name": "Relation Mapping"}
2022-08-11T09:51:58+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-1<n<1K #language-English #license-other #region-us
Dataset Card for "relbert/relation\_mapping" ============================================ Dataset Description ------------------- * Repository: RelBERT * Paper: URL * Dataset: Relation Mapping ### Dataset Summary Relation Mapping is a task to choose optimal combination of word pairs (see more detail in the paper). Relation mapping 'M' is the set of bijective map in between two sets of terms ('A' and 'B'): *Relation Mapping Problem* is the task to identify the mapping 'M' given the sets of terms 'A' and 'B'. Dataset Structure ----------------- ### Data Instances An example looks as follows. * 'source': A list of terms, which is the source of the relation mapping from. * 'target\_random': A list of terms, where we want to find a mapping from 'source' to. * 'target': A correctly ordered 'target\_random' that aligns with the 'source'. Given 'source' and 'target\_random', the task is to predict the correct order of 'target\_random' so that it matches 'target'. In average 7 terms are in the set, so the total number of possible order is 5040. ### Data Splits
[ "### Dataset Summary\n\n\nRelation Mapping is a task to choose optimal combination of word pairs (see more detail in the paper).\n\n\nRelation mapping 'M' is the set of bijective map in between two sets of terms ('A' and 'B'):\n\n\n*Relation Mapping Problem* is the task to identify the mapping 'M' given the sets of terms 'A' and 'B'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.\n\n\n* 'source': A list of terms, which is the source of the relation mapping from.\n* 'target\\_random': A list of terms, where we want to find a mapping from 'source' to.\n* 'target': A correctly ordered 'target\\_random' that aligns with the 'source'.\n\n\nGiven 'source' and 'target\\_random', the task is to predict the correct order of 'target\\_random' so that it matches 'target'.\nIn average 7 terms are in the set, so the total number of possible order is 5040.", "### Data Splits" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-1<n<1K #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nRelation Mapping is a task to choose optimal combination of word pairs (see more detail in the paper).\n\n\nRelation mapping 'M' is the set of bijective map in between two sets of terms ('A' and 'B'):\n\n\n*Relation Mapping Problem* is the task to identify the mapping 'M' given the sets of terms 'A' and 'B'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.\n\n\n* 'source': A list of terms, which is the source of the relation mapping from.\n* 'target\\_random': A list of terms, where we want to find a mapping from 'source' to.\n* 'target': A correctly ordered 'target\\_random' that aligns with the 'source'.\n\n\nGiven 'source' and 'target\\_random', the task is to predict the correct order of 'target\\_random' so that it matches 'target'.\nIn average 7 terms are in the set, so the total number of possible order is 5040.", "### Data Splits" ]
[ 34, 96, 154, 5 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-1<n<1K #language-English #license-other #region-us \n### Dataset Summary\n\n\nRelation Mapping is a task to choose optimal combination of word pairs (see more detail in the paper).\n\n\nRelation mapping 'M' is the set of bijective map in between two sets of terms ('A' and 'B'):\n\n\n*Relation Mapping Problem* is the task to identify the mapping 'M' given the sets of terms 'A' and 'B'.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example looks as follows.\n\n\n* 'source': A list of terms, which is the source of the relation mapping from.\n* 'target\\_random': A list of terms, where we want to find a mapping from 'source' to.\n* 'target': A correctly ordered 'target\\_random' that aligns with the 'source'.\n\n\nGiven 'source' and 'target\\_random', the task is to predict the correct order of 'target\\_random' so that it matches 'target'.\nIn average 7 terms are in the set, so the total number of possible order is 5040.### Data Splits" ]
10c6f27014e29ecee20aaa336dc25412c0fedf81
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8bc70ef8-11355511
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T04:48:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-07-22T05:44:01+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ 13, 95, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
ff221b56ac6468869eb8b0630a01921263aae6e3
## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Installation](#installation) - [Install requirements](#install-requirements) - [Download settings](#download-settings) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kietzmannlab.org/ecoset](https://www.kietzmannlab.org/ecoset/) - **Repository:** [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/6266601/tree/v1) - **Paper:** [https://www.pnas.org/doi/full/10.1073/pnas.2011417118](https://doi.org/10.1073/pnas.2011417118) - **Point of Contact:** [[email protected]]([email protected]) ### Dataset Summary Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not). Ecoset is a typical image recognition dataset, combining images of objects with appropriate labels (one label per image). Importantly, ecoset is intended to provide higher ecological validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content. For more information on the dataset, consider reading the [original publication](https://doi.org/10.1073/pnas.2011417118). Ecoset consists of a train, test, and validation subset which all are openly available to the user. ### Supported Tasks and Leaderboards Ecoset is a large multi-class single-label object recognition image dataset (similar to ImageNet). ## Installation ### Install Requirements In order to work with ecoset, please make sure to install huggingface datasets: ```bash pip install datasets ``` If you want to work with the dataset in `Huggingface.datasets`, you might also want to make sure to install PIL (`pip install Pillow`) in order to work with image input. However, downloading the dataset will work despite not having installed PIL. ### Download Settings Please set `verification_mode=no_checks`. when downloading this dataset, else the download will result in an error, additionally you may need to install defusedxml via pip to avoid Permission Errors required by _generate_examples method: ```python from datasets import load_dataset dataset = load_dataset("kietzmannlab/ecoset", verification_mode=no_checks) ``` optionally a cache_dir can be specified where the zip file will be downloaded and extracted ```python from datasets import load_dataset dataset = load_dataset("kietzmannlab/ecoset", verification_mode=no_checks, cache_dir='/path/to/dir') ``` | NOTE: If you get errors like: `FileNotFoundError: [Errno 2] No such file or directory:'<DATASET_PATH>'` this is likely due do having previously downloaded the dataset and then cancelling the download. If this is the case for you, you can fix this error by manually removing the dataset path and reinstalling the dataset. | | --- | ## Dataset Structure We show detailed information for all the configurations of the dataset. Currently, there is only one setting (`Full`) available, containing all data. ### Data Instances #### Full - **Size of downloaded dataset files:** 155 GB - **Total amount of disk used:** 311 GB ## Dataset Creation A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%. ### Curation Rationale More information on the curation of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Source Data The source data is available under: [https://codeocean.com/capsule/9570390/tree/v1](https://codeocean.com/capsule/6266601/tree/v1) ### Annotations Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset. ### Personal and Sensitive Information The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information. ## Considerations for Using the Data ### Social Impact of Dataset Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the [original publication](https://doi.org/10.1073/pnas.2011417118). ### Discussion of Biases Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.). ### Other Known Limitations In addition to points mentioned in [Discussion of Biases](#discussion-of-biases), ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infant and adults. ## Additional Information ### Dataset Curators The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann. ### Licensing Information Ecoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0). ### Citation Information ``` @article{mehrer2021ecologically, title={An ecologically motivated image dataset for deep learning yields better models of human vision}, author={Mehrer, Johannes and Spoerer, Courtney J and Jones, Emer C and Kriegeskorte, Nikolaus and Kietzmann, Tim C}, journal={Proceedings of the National Academy of Sciences}, volume={118}, number={8}, pages={e2011417118}, year={2021}, publisher={National Acad Sciences} } ``` ### Contributions The ecoset dataloader and dataset card was created by [@DiGyt](https://github.com/DiGyt) on behalf of [@kietzmannlab](https://huggingface.co/kietzmannlab). For questions and suggestions feel free to reach out.
kietzmannlab/ecoset
[ "task_categories:image-classification", "task_ids:multi-class-classification", "task_ids:multi-class-image-classification", "source_datasets:original", "license:cc", "other-image-classification", "image-classification", "region:us" ]
2022-07-21T06:33:50+00:00
{"license": "cc", "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification", "multi-class-image-classification"], "paperswithcode_id": "ecoset", "pretty_name": "Ecoset", "tags": ["other-image-classification", "image-classification"]}
2024-02-02T19:13:47+00:00
[]
[]
TAGS #task_categories-image-classification #task_ids-multi-class-classification #task_ids-multi-class-image-classification #source_datasets-original #license-cc #other-image-classification #image-classification #region-us
Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards * Installation + Install requirements + Download settings * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Point of Contact: tim.kietzmann@URL ### Dataset Summary Tired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images from 565 basic level categories, chosen to be both (i) frequent in linguistic usage, and (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’ is not). Ecoset is a typical image recognition dataset, combining images of objects with appropriate labels (one label per image). Importantly, ecoset is intended to provide higher ecological validity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content. For more information on the dataset, consider reading the original publication. Ecoset consists of a train, test, and validation subset which all are openly available to the user. ### Supported Tasks and Leaderboards Ecoset is a large multi-class single-label object recognition image dataset (similar to ImageNet). Installation ------------ ### Install Requirements In order to work with ecoset, please make sure to install huggingface datasets: If you want to work with the dataset in 'Huggingface.datasets', you might also want to make sure to install PIL ('pip install Pillow') in order to work with image input. However, downloading the dataset will work despite not having installed PIL. ### Download Settings Please set 'verification\_mode=no\_checks'. when downloading this dataset, else the download will result in an error, additionally you may need to install defusedxml via pip to avoid Permission Errors required by \_generate\_examples method: optionally a cache\_dir can be specified where the zip file will be downloaded and extracted Dataset Structure ----------------- We show detailed information for all the configurations of the dataset. Currently, there is only one setting ('Full') available, containing all data. ### Data Instances #### Full * Size of downloaded dataset files: 155 GB * Total amount of disk used: 311 GB Dataset Creation ---------------- A total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX\_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%. ### Curation Rationale More information on the curation of the dataset can be found in the original publication. ### Source Data The source data is available under: URL ### Annotations Each ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset. ### Personal and Sensitive Information The dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset Large-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the original publication. ### Discussion of Biases Despite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.). ### Other Known Limitations In addition to points mentioned in Discussion of Biases, ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infant and adults. Additional Information ---------------------- ### Dataset Curators The corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann. ### Licensing Information Ecoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0). ### Contributions The ecoset dataloader and dataset card was created by @DiGyt on behalf of @kietzmannlab. For questions and suggestions feel free to reach out.
[ "### Dataset Summary\n\n\nTired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images\nfrom 565 basic level categories, chosen to be both (i) frequent in linguistic usage,\nand (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’\nis not).\n\n\nEcoset is a typical image recognition dataset, combining images of objects with appropriate\nlabels (one label per image). Importantly, ecoset is intended to provide higher ecological\nvalidity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content.\nFor more information on the dataset, consider reading the original publication.\n\n\nEcoset consists of a train, test, and validation subset which all are openly available to the user.", "### Supported Tasks and Leaderboards\n\n\nEcoset is a large multi-class single-label object recognition image dataset (similar to ImageNet).\n\n\nInstallation\n------------", "### Install Requirements\n\n\nIn order to work with ecoset, please make sure to install huggingface datasets:\n\n\nIf you want to work with the dataset in 'Huggingface.datasets', you might also want to make sure to install PIL ('pip install Pillow') in order to work with image input. However, downloading the dataset will work despite not having installed PIL.", "### Download Settings\n\n\nPlease set 'verification\\_mode=no\\_checks'. when downloading this dataset, else the download will result in an error, additionally you may need to\ninstall defusedxml via pip to avoid Permission Errors required by \\_generate\\_examples method:\n\n\noptionally a cache\\_dir can be specified where the zip file will be downloaded and extracted\n\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all the configurations of the dataset. Currently, there is only one setting ('Full') available, containing all data.", "### Data Instances", "#### Full\n\n\n* Size of downloaded dataset files: 155 GB\n* Total amount of disk used: 311 GB\n\n\nDataset Creation\n----------------\n\n\nA total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX\\_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%.", "### Curation Rationale\n\n\nMore information on the curation of the dataset can be found in the original publication.", "### Source Data\n\n\nThe source data is available under: URL", "### Annotations\n\n\nEach ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset.", "### Personal and Sensitive Information\n\n\nThe dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nLarge-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the original publication.", "### Discussion of Biases\n\n\nDespite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.).", "### Other Known Limitations\n\n\nIn addition to points mentioned in Discussion of Biases, ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infant and adults.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann.", "### Licensing Information\n\n\nEcoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0).", "### Contributions\n\n\nThe ecoset dataloader and dataset card was created by @DiGyt on behalf of @kietzmannlab.\nFor questions and suggestions feel free to reach out." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #task_ids-multi-class-image-classification #source_datasets-original #license-cc #other-image-classification #image-classification #region-us \n", "### Dataset Summary\n\n\nTired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images\nfrom 565 basic level categories, chosen to be both (i) frequent in linguistic usage,\nand (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’\nis not).\n\n\nEcoset is a typical image recognition dataset, combining images of objects with appropriate\nlabels (one label per image). Importantly, ecoset is intended to provide higher ecological\nvalidity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content.\nFor more information on the dataset, consider reading the original publication.\n\n\nEcoset consists of a train, test, and validation subset which all are openly available to the user.", "### Supported Tasks and Leaderboards\n\n\nEcoset is a large multi-class single-label object recognition image dataset (similar to ImageNet).\n\n\nInstallation\n------------", "### Install Requirements\n\n\nIn order to work with ecoset, please make sure to install huggingface datasets:\n\n\nIf you want to work with the dataset in 'Huggingface.datasets', you might also want to make sure to install PIL ('pip install Pillow') in order to work with image input. However, downloading the dataset will work despite not having installed PIL.", "### Download Settings\n\n\nPlease set 'verification\\_mode=no\\_checks'. when downloading this dataset, else the download will result in an error, additionally you may need to\ninstall defusedxml via pip to avoid Permission Errors required by \\_generate\\_examples method:\n\n\noptionally a cache\\_dir can be specified where the zip file will be downloaded and extracted\n\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all the configurations of the dataset. Currently, there is only one setting ('Full') available, containing all data.", "### Data Instances", "#### Full\n\n\n* Size of downloaded dataset files: 155 GB\n* Total amount of disk used: 311 GB\n\n\nDataset Creation\n----------------\n\n\nA total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX\\_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%.", "### Curation Rationale\n\n\nMore information on the curation of the dataset can be found in the original publication.", "### Source Data\n\n\nThe source data is available under: URL", "### Annotations\n\n\nEach ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset.", "### Personal and Sensitive Information\n\n\nThe dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nLarge-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the original publication.", "### Discussion of Biases\n\n\nDespite best efforts to provide an ecologically valid and overall less biased dataset, ecoset is still likely to contain biased data. The category selection of ecoset was based on human concreteness ratings and word frequencies in a corpus consisting of American television and film subtitles. This undoubtedly biases the category selection toward Western cultures. Image inclusion was based on the availability via Bing/Flickr search results as well as the existence of relevant ImageNet categories. Images depicting people, specifically the categories “man,” “woman,” and “child,” were not sampled according to census distributions (age, ethnicity, gender, etc.).", "### Other Known Limitations\n\n\nIn addition to points mentioned in Discussion of Biases, ecoset image and category distributions do not reflect the naturalistic, egocentric visual input typically encountered in the everyday life of infant and adults.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe corpus was put together by Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, Nikolaus Kriegeskorte, and Tim C. Kietzmann.", "### Licensing Information\n\n\nEcoset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license (cc-by-nc-sa-2.0).", "### Contributions\n\n\nThe ecoset dataloader and dataset card was created by @DiGyt on behalf of @kietzmannlab.\nFor questions and suggestions feel free to reach out." ]
[ 68, 188, 36, 93, 134, 6, 159, 25, 12, 44, 111, 110, 165, 58, 44, 35, 41 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #task_ids-multi-class-image-classification #source_datasets-original #license-cc #other-image-classification #image-classification #region-us \n### Dataset Summary\n\n\nTired of all the dogs in ImageNet (ILSVRC)? Then ecoset is here for you. 1.5m images\nfrom 565 basic level categories, chosen to be both (i) frequent in linguistic usage,\nand (ii) rated by human observers as concrete (e.g. ‘table’ is concrete, ‘romance’\nis not).\n\n\nEcoset is a typical image recognition dataset, combining images of objects with appropriate\nlabels (one label per image). Importantly, ecoset is intended to provide higher ecological\nvalidity than its counterparts, with a mislabelling error rate < 5% and filtered for NSFW content.\nFor more information on the dataset, consider reading the original publication.\n\n\nEcoset consists of a train, test, and validation subset which all are openly available to the user.### Supported Tasks and Leaderboards\n\n\nEcoset is a large multi-class single-label object recognition image dataset (similar to ImageNet).\n\n\nInstallation\n------------### Install Requirements\n\n\nIn order to work with ecoset, please make sure to install huggingface datasets:\n\n\nIf you want to work with the dataset in 'Huggingface.datasets', you might also want to make sure to install PIL ('pip install Pillow') in order to work with image input. However, downloading the dataset will work despite not having installed PIL.", "passage: ### Download Settings\n\n\nPlease set 'verification\\_mode=no\\_checks'. when downloading this dataset, else the download will result in an error, additionally you may need to\ninstall defusedxml via pip to avoid Permission Errors required by \\_generate\\_examples method:\n\n\noptionally a cache\\_dir can be specified where the zip file will be downloaded and extracted\n\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for all the configurations of the dataset. Currently, there is only one setting ('Full') available, containing all data.### Data Instances#### Full\n\n\n* Size of downloaded dataset files: 155 GB\n* Total amount of disk used: 311 GB\n\n\nDataset Creation\n----------------\n\n\nA total of 565 categories were selected based on the following: 1) their word frequency in American television and film subtitles (SUBTLEX\\_US), 2) the perceived concreteness by human observers, and 3) the availability of a minimum of 700 images. Images were sourced via the overall ImageNet database (the same resource used for ILSVRC 2012) or obtained under CC BY-NC-SA 2.0 license from Bing image search and Flickr. Thorough data cleaning procedures were put in place to remove duplicates and to assure an expected misclassification rate per category of <4%.### Curation Rationale\n\n\nMore information on the curation of the dataset can be found in the original publication.### Source Data\n\n\nThe source data is available under: URL### Annotations\n\n\nEach ecoset image folder is annotated with class labels according to the main object depicted in a class of images. No further annotations are added to the dataset.### Personal and Sensitive Information\n\n\nThe dataset was tested to exclude sensitive images using Yahoo's Open NSFW detection model, removing all image with an NSFW score above 0.8. For this dataset, only images with secured license information was used, which should prevent the inclusion of images without consent of the image's authors and subjects. Despite these measures, it is possible that the images in the dataset contain personal and sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nLarge-scale image-label datasets such as ImageNet are the backbone of modern Computer Vision. However, such large datasets often suffer from problems like mislabeling, category biases, misrepresentations, and unsafe content. Ecoset was created with the aim to reduce these biases and consequently improve the social impact of Computer Vision techniques trained on the dataset. More information on the social impact of the dataset can be found in the original publication." ]
9d7c3583cb446ef2e26c6fca24324e7dd295e238
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cestwc__cnn_dailymail-test50-b9fb5faf-11395515
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T08:56:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cestwc/cnn_dailymail-test50"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cestwc/cnn_dailymail-test50", "dataset_config": "cestwc--cnn_dailymail-test50", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T08:57:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Buckeyes2019 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
[ 13, 106, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
035943f67ab75602dc39ab84e279f27f10e80e1e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cestwc__cnn_dailymail-test50-b9fb5faf-11395514
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T08:56:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cestwc/cnn_dailymail-test50"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "cestwc/cnn_dailymail-test50", "dataset_config": "cestwc--cnn_dailymail-test50", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-21T08:58:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cestwc/cnn_dailymail-test50 * Config: cestwc--cnn_dailymail-test50 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Buckeyes2019 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
[ 13, 108, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: cestwc/cnn_dailymail-test50\n* Config: cestwc--cnn_dailymail-test50\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Buckeyes2019 for evaluating this model." ]
0f685a035621e4a9c17aa71437e1d6325144d5d4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-10fe815c-11415521
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:00+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nickprock for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ 13, 99, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nickprock for evaluating this model." ]
e83125a08d57be6c9e0aa40ad7f06ecb1d77adc5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-34727576-11425522
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:53+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nickprock for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ 13, 99, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nickprock for evaluating this model." ]
1f3971387a63eab5ed76d795c501249904f2161b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nickprock](https://huggingface.co/nickprock) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-banking77-9cb960fa-11435523
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T11:41:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["banking77"], "eval_info": {"task": "multi_class_classification", "model": "nickprock/distilbert-base-uncased-banking77-classification", "metrics": [], "dataset_name": "banking77", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-07-21T11:41:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: nickprock/distilbert-base-uncased-banking77-classification * Dataset: banking77 * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nickprock for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nickprock for evaluating this model." ]
[ 13, 99, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: nickprock/distilbert-base-uncased-banking77-classification\n* Dataset: banking77\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nickprock for evaluating this model." ]
2ba19f47e9b5a645c1c2e9232c8abd69f91ec8df
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@jmsteen](https://huggingface.co/jmsteen) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-82ea4996-11445524
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T13:22:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-22T13:59:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @jmsteen for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @jmsteen for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @jmsteen for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @jmsteen for evaluating this model." ]
f39a0f32e1e09f34099c4b0ed22b35935e537cbc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-976d13e6-0b05-475e-9b4e-e8fbc174cfae-346
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:35:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T14:37:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
e66c0d2ce2bde245f0a64d8eea309b2f27e26c80
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d3ec9b9a-b64a-40a0-baff-3af478f604df-367
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:44:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T14:50:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 96, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
0a02e8200fb7a51296112bade2ab912df6f09361
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f2158b57-4f5f-457d-9656-edbe0fb0d311-398
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T14:58:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:01:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
127f37dff7cde0aad160e7e0343214ae6114046e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e81e3618-f3e1-472b-97e0-2794cda0adb2-409
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:06:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:09:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
37906d94ced6a00549b67d7e5d5bd8b295042f5d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-df92c53c-2bfd-442d-8572-7541578e7feb-4110
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:19:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:23:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
738a202f3044f0e5191aeee1061701c61f15e6cb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-9ec0b53a-81c5-4d01-88f6-bf53413cd1a8-4611
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:32:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T15:34:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6d679cc141274969e47290ea5e6e6b3f25016591
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-9ec0b53a-81c5-4d01-88f6-bf53413cd1a8-4612
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T15:37:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T16:25:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 97, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
1c37d22eef2e4e729d8908c098b0362848f42c51
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-7c1a5e5f-11505530
[ "autotrain", "evaluation", "region:us" ]
2022-07-21T16:43:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-21T16:47:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
42ab35c272ec2a3248521e36ffffed0115dab581
# Dataset Card for Auditor Sentiment ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) ## Dataset Description Auditor review sentiment collected by News Department - **Point of Contact:** Talked to COE for Auditing, currently [email protected] ### Dataset Summary Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment. ### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ``` "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .", "label": "negative" ``` ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0) ### Data Splits A train/test split was created randomly with a 75/25 split ## Dataset Creation ### Curation Rationale To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance. ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news reports. #### Who are the source language producers? The source data was written by various auditors. ### Annotations #### Annotation process This release of the auditor reviews covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. The subset here is where inter-annotation agreement was greater than 75%. #### Who are the annotators? They were pulled from the SME list, names are held by [email protected] ### Personal and Sensitive Information There is no personal or sensitive information in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. ### Licensing Information License: Demo.Org Proprietary - DO NOT SHARE This dataset is based on the [financial phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset.
FinanceInc/auditor_sentiment
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "region:us" ]
2022-07-21T17:25:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "Auditor_Sentiment"}
2022-07-21T18:03:51+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us
# Dataset Card for Auditor Sentiment ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information ## Dataset Description Auditor review sentiment collected by News Department - Point of Contact: Talked to COE for Auditing, currently sue@URL ### Dataset Summary Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment. ### Supported Tasks and Leaderboards Sentiment Classification ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - sentence: a tokenized line from the dataset - label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0) ### Data Splits A train/test split was created randomly with a 75/25 split ## Dataset Creation ### Curation Rationale To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance. ### Source Data #### Initial Data Collection and Normalization The corpus used in this paper is made out of English news reports. #### Who are the source language producers? The source data was written by various auditors. ### Annotations #### Annotation process This release of the auditor reviews covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge on financial markets. The subset here is where inter-annotation agreement was greater than 75%. #### Who are the annotators? They were pulled from the SME list, names are held by sue@URL ### Personal and Sensitive Information There is no personal or sensitive information in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases All annotators were from the same institution and so interannotator agreement should be understood with this taken into account. ### Licensing Information License: Demo.Org Proprietary - DO NOT SHARE This dataset is based on the financial phrasebank dataset.
[ "# Dataset Card for Auditor Sentiment", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information", "## Dataset Description\nAuditor review sentiment collected by News Department\n\n- Point of Contact:\nTalked to COE for Auditing, currently sue@URL", "### Dataset Summary\n\nAuditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.", "### Supported Tasks and Leaderboards\n\nSentiment Classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: a tokenized line from the dataset\n- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)", "### Data Splits\n\nA train/test split was created randomly with a 75/25 split", "## Dataset Creation", "### Curation Rationale\n\nTo gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpus used in this paper is made out of English news reports.", "#### Who are the source language producers?\n\nThe source data was written by various auditors.", "### Annotations", "#### Annotation process\n\nThis release of the auditor reviews covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. The subset here is where inter-annotation agreement was greater than 75%.", "#### Who are the annotators?\n\nThey were pulled from the SME list, names are held by sue@URL", "### Personal and Sensitive Information\n\nThere is no personal or sensitive information in this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nAll annotators were from the same institution and so interannotator agreement\nshould be understood with this taken into account.", "### Licensing Information\n\nLicense: Demo.Org Proprietary - DO NOT SHARE\n\nThis dataset is based on the financial phrasebank dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us \n", "# Dataset Card for Auditor Sentiment", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information", "## Dataset Description\nAuditor review sentiment collected by News Department\n\n- Point of Contact:\nTalked to COE for Auditing, currently sue@URL", "### Dataset Summary\n\nAuditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.", "### Supported Tasks and Leaderboards\n\nSentiment Classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: a tokenized line from the dataset\n- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)", "### Data Splits\n\nA train/test split was created randomly with a 75/25 split", "## Dataset Creation", "### Curation Rationale\n\nTo gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpus used in this paper is made out of English news reports.", "#### Who are the source language producers?\n\nThe source data was written by various auditors.", "### Annotations", "#### Annotation process\n\nThis release of the auditor reviews covers a collection of 4840\nsentences. The selected collection of phrases was annotated by 16 people with\nadequate background knowledge on financial markets. The subset here is where inter-annotation agreement was greater than 75%.", "#### Who are the annotators?\n\nThey were pulled from the SME list, names are held by sue@URL", "### Personal and Sensitive Information\n\nThere is no personal or sensitive information in this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nAll annotators were from the same institution and so interannotator agreement\nshould be understood with this taken into account.", "### Licensing Information\n\nLicense: Demo.Org Proprietary - DO NOT SHARE\n\nThis dataset is based on the financial phrasebank dataset." ]
[ 93, 8, 117, 30, 37, 14, 5, 6, 6, 51, 19, 5, 48, 4, 24, 20, 5, 60, 25, 20, 8, 7, 33, 33 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us \n# Dataset Card for Auditor Sentiment## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information## Dataset Description\nAuditor review sentiment collected by News Department\n\n- Point of Contact:\nTalked to COE for Auditing, currently sue@URL### Dataset Summary\n\nAuditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.### Supported Tasks and Leaderboards\n\nSentiment Classification### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields\n\n- sentence: a tokenized line from the dataset\n- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)### Data Splits\n\nA train/test split was created randomly with a 75/25 split## Dataset Creation### Curation Rationale\n\nTo gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance.### Source Data#### Initial Data Collection and Normalization\n\nThe corpus used in this paper is made out of English news reports.#### Who are the source language producers?\n\nThe source data was written by various auditors.### Annotations" ]
795824409d295424e69005d881d5370f177265b8
annotations_creators: - no-annotation language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: structured song lyrics size_categories: [] source_datasets: [] tags: - lyrics task_categories: - text-generation task_ids: - language-modeling [Needs More Information] # Dataset Card for song_lyrics ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Structured song lyrics ### Supported Tasks and Leaderboards text generation ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
nbsullivan/song_lyrics
[ "region:us" ]
2022-07-21T18:55:40+00:00
{}
2022-07-21T19:19:14+00:00
[]
[]
TAGS #region-us
annotations_creators: - no-annotation language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: structured song lyrics size_categories: [] source_datasets: [] tags: - lyrics task_categories: - text-generation task_ids: - language-modeling # Dataset Card for song_lyrics ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Structured song lyrics ### Supported Tasks and Leaderboards text generation ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for song_lyrics", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nStructured song lyrics", "### Supported Tasks and Leaderboards\n\ntext generation", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for song_lyrics", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nStructured song lyrics", "### Supported Tasks and Leaderboards\n\ntext generation", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ 6, 10, 112, 24, 11, 12, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for song_lyrics## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nStructured song lyrics### Supported Tasks and Leaderboards\n\ntext generation### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information" ]
e670508f77f244a24a8bcf100f02011df9d8435b
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney): users issue a query in natural language, and the Midjourney bot returns AI-generated images that follow the given description. The raw dataset (with Discord messages) can be found on Kaggle: [Midjourney User Prompts & Generated Images (250k)](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage). The authors of the scraped dataset have no affiliation to Midjourney. This HuggingFace dataset was [processed](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) from the raw Discord messages to solely include the text prompts issued by the user (thus excluding the generated images and any other metadata). It could be used, for instance, to fine-tune a large language model to produce or auto-complete creative prompts for image generation. Check out [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator), a GPT-2 model fine-tuned on this dataset.
succinctly/midjourney-prompts
[ "license:apache-2.0", "region:us" ]
2022-07-21T19:29:49+00:00
{"license": "apache-2.0"}
2022-07-22T00:49:16+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Midjourney is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server: users issue a query in natural language, and the Midjourney bot returns AI-generated images that follow the given description. The raw dataset (with Discord messages) can be found on Kaggle: Midjourney User Prompts & Generated Images (250k). The authors of the scraped dataset have no affiliation to Midjourney. This HuggingFace dataset was processed from the raw Discord messages to solely include the text prompts issued by the user (thus excluding the generated images and any other metadata). It could be used, for instance, to fine-tune a large language model to produce or auto-complete creative prompts for image generation. Check out succinctly/text2image-prompt-generator, a GPT-2 model fine-tuned on this dataset.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
b9190341f1939b12ce99c0b3120590e9d24033dc
# Dataset Card for "WikiArt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Artificio/WikiArt
[ "region:us" ]
2022-07-21T20:18:50+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "artist", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "style", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "embeddings_pca512", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1659296285.75, "num_examples": 103250}], "download_size": 1711766693, "dataset_size": 1659296285.75}}
2023-01-18T17:13:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "WikiArt" More Information needed
[ "# Dataset Card for \"WikiArt\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"WikiArt\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"WikiArt\"\n\nMore Information needed" ]
35a56f3c865a3b3abdc7e3386804fe2063efd6f2
# Dataset Card for `reviews_with_drift` ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added (`age`, `gender`, `context`) as well as a made up timestamp `prediction_ts` of when the inference took place. ### Supported Tasks and Leaderboards `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in english. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset.
arize-ai/cifar10_quality_drift
[ "task_categories:image-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|imdb", "language:en", "license:mit", "region:us" ]
2022-07-21T22:00:55+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"}
2022-10-25T09:40:25+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us
# Dataset Card for 'reviews_with_drift' ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description ### Dataset Summary This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place. ### Supported Tasks and Leaderboards 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative). ### Languages Text is mainly written in english. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fjcasti1 for adding this dataset.
[ "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us \n", "# Dataset Card for 'reviews_with_drift'", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description", "### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.", "### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).", "### Languages\n\nText is mainly written in english.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fjcasti1 for adding this dataset." ]
[ 95, 13, 125, 4, 120, 50, 12, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us \n# Dataset Card for 'reviews_with_drift'## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).### Languages\n\nText is mainly written in english.## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information" ]
3a0ac3296e467afae7bd4d6ffc6ab795af8904d9
# Dataset Card for NERDE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [NERDE repository](https://github.com/guipaiva/NERDE) - **Point of Contact:** [Guilherme P. Paiva](mailto:[email protected]) ### Dataset Summary NERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@guipaiva](https://github.com/guipaiva) for adding this dataset.
Gpaiva/NERDE
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:cc-by-4.0", "ner", "portuguese-ner", "economic-defense", "region:us" ]
2022-07-22T00:50:19+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["pt"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "NERDE", "tags": ["ner", "portuguese-ner", "economic-defense"]}
2022-07-28T00:27:18+00:00
[]
[ "pt" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #ner #portuguese-ner #economic-defense #region-us
# Dataset Card for NERDE ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: NERDE repository - Point of Contact: Guilherme P. Paiva ### Dataset Summary NERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade) ### Supported Tasks and Leaderboards ### Languages The language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @guipaiva for adding this dataset.
[ "# Dataset Card for NERDE", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: NERDE repository\n- Point of Contact: Guilherme P. Paiva", "### Dataset Summary\n\nNERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade)", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @guipaiva for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #ner #portuguese-ner #economic-defense #region-us \n", "# Dataset Card for NERDE", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: NERDE repository\n- Point of Contact: Guilherme P. Paiva", "### Dataset Summary\n\nNERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade)", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @guipaiva for adding this dataset." ]
[ 114, 7, 125, 26, 51, 10, 35, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #ner #portuguese-ner #economic-defense #region-us \n# Dataset Card for NERDE## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: NERDE repository\n- Point of Contact: Guilherme P. Paiva### Dataset Summary\n\nNERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with LATITUDE/UnB Laboratory and the Administrative Council for Economic Defense (Cade)### Supported Tasks and Leaderboards### Languages\n\nThe language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information" ]
49ea9e40149871828d02aed166988c67dcda75c4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: distilbert-base-uncased-finetuned-sst-2-english * Dataset: sst2 * Config: default * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sst2-ee5c821a-11545531
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T05:30:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sst2"], "eval_info": {"task": "multi_class_classification", "model": "distilbert-base-uncased-finetuned-sst-2-english", "metrics": [], "dataset_name": "sst2", "dataset_config": "default", "dataset_split": "train", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-07-22T05:33:53+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: distilbert-base-uncased-finetuned-sst-2-english * Dataset: sst2 * Config: default * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Neez for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: sst2\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: sst2\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ 13, 99, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: sst2\n* Config: default\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Neez for evaluating this model." ]
97197c4a27472a1cb112d4f384ba6f70e040b2a6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Neez](https://huggingface.co/Neez) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-7c900a64-11555532
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T06:39:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "tuner007/pegasus_summarizer", "metrics": ["accuracy", "f1", "precision", "recall"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-07-23T21:08:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: tuner007/pegasus_summarizer * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Neez for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Neez for evaluating this model." ]
[ 13, 90, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: tuner007/pegasus_summarizer\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Neez for evaluating this model." ]
d1b54f2b452230e082fbdc30fe42b0f96c44ff16
This dataset provides information of all the spaces (~6,200 at time of snapshot) created on [HuggingFace Spaces](https://huggingface.co/spaces) 🤗. Most of the data comes from a public API endpoint while some of the data is enriched by web scraping. The dataset is intended to provide a snapshot of the spaces and was last updated in first week of *July-2022*. Along with the name of the space, the dataset consists of following columns: - likes (number of likes on the space) - sdk (streamlit,gradio or other) - status (was running successfully or had error when snapshot was taken) - total_commits (number of commits in the space) - last_commit (when did last commit happen) - community_interactions (number of interactions in the newly introduced Community tab) Apart from these, we have also added some post-processing columns (where space was using gradio): - inputs (Image/Text/Slider etc) - outputs (Image/Audio/Textbox etc) - ai_ml_reqs (If the requirements.txt contain a popular ML repo dependency like: torch, tensorflow, pandas, sklearn, scipy etc) Contributors: - [Abdullah Meda](https://www.linkedin.com/in/abdmeda/) - [Ayush Ranwa](https://twitter.com/Ayushranwa6) - [Deepak Rawat](https://twitter.com/dsr_ai) - [Kartik Godawat](https://twitter.com/kartik_godawat) Please reach out to us for any queries or discussions.
deepklarity/huggingface-spaces-dataset
[ "license:cc", "region:us" ]
2022-07-22T07:45:29+00:00
{"license": "cc"}
2022-07-22T08:10:17+00:00
[]
[]
TAGS #license-cc #region-us
This dataset provides information of all the spaces (~6,200 at time of snapshot) created on HuggingFace Spaces . Most of the data comes from a public API endpoint while some of the data is enriched by web scraping. The dataset is intended to provide a snapshot of the spaces and was last updated in first week of *July-2022*. Along with the name of the space, the dataset consists of following columns: - likes (number of likes on the space) - sdk (streamlit,gradio or other) - status (was running successfully or had error when snapshot was taken) - total_commits (number of commits in the space) - last_commit (when did last commit happen) - community_interactions (number of interactions in the newly introduced Community tab) Apart from these, we have also added some post-processing columns (where space was using gradio): - inputs (Image/Text/Slider etc) - outputs (Image/Audio/Textbox etc) - ai_ml_reqs (If the URL contain a popular ML repo dependency like: torch, tensorflow, pandas, sklearn, scipy etc) Contributors: - Abdullah Meda - Ayush Ranwa - Deepak Rawat - Kartik Godawat Please reach out to us for any queries or discussions.
[]
[ "TAGS\n#license-cc #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-cc #region-us \n" ]
f2f8f031c380b6d0ccd2a8102a40717e4a036884
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585539
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:33:29+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
add96f0971c3921b3b77150838ef0d0494986fa9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585538
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:34:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 90, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
7d2e66ed02c4ff5b893295433a4e2f9f7aaa3592
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-7ad816c0-11585540
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:33:32+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 97, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
ae4442bb10bc1cd57779ad99594d94db75420667
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595541
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:34:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
91aaa4a325ad414cfcde8690892b7dedb5425530
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595542
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:34:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 96, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
c9fbf6541ad051a61f3bea8ea553af895ddb0449
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595543
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T08:31:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T08:34:25+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/distilbert-base-cased-distilled-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 102, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/distilbert-base-cased-distilled-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
b87c2d6f00929ca0f2f43e8d1a3532e4b0df069f
XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 other languages. This dataset is released by Meta AI. # Languages ru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my. # Data Splits This dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multlingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment. # Access English StoryCloze Please request the original English StoryCloze dataset through the [official channel](https://cs.rochester.edu/nlp/rocstories/). You can create a split of the en data following our data split scheme using the following commands: ``` head -361 spring2016.val.tsv > spring2016.val.en.tsv.split_20_80_train.tsv head -1 spring2016.val.tsv > spring2016.val.en.tsv.split_20_80_eval.tsv tail -1510 spring2016.val.tsv >> spring2016.val.en.tsv.split_20_80_eval.tsv ``` # Licence XStoryCloze is opensourced under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode), the same license as the original English StoryCloze. # Citation If you use XStoryCloze in your work, please cite ``` @article{DBLP:journals/corr/abs-2112-10668, author = {Xi Victoria Lin and Todor Mihaylov and Mikel Artetxe and Tianlu Wang and Shuohui Chen and Daniel Simig and Myle Ott and Naman Goyal and Shruti Bhosale and Jingfei Du and Ramakanth Pasunuru and Sam Shleifer and Punit Singh Koura and Vishrav Chaudhary and Brian O'Horo and Jeff Wang and Luke Zettlemoyer and Zornitsa Kozareva and Mona T. Diab and Veselin Stoyanov and Xian Li}, title = {Few-shot Learning with Multilingual Language Models}, journal = {CoRR}, volume = {abs/2112.10668}, year = {2021}, url = {https://arxiv.org/abs/2112.10668}, eprinttype = {arXiv}, eprint = {2112.10668}, timestamp = {Tue, 04 Jan 2022 15:59:27 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Muennighoff/xstory_cloze_data
[ "arxiv:2112.10668", "region:us" ]
2022-07-22T08:56:03+00:00
{}
2022-07-22T09:00:22+00:00
[ "2112.10668" ]
[]
TAGS #arxiv-2112.10668 #region-us
XStoryCloze consists of the professionally translated version of the English StoryCloze dataset (Spring 2016 version) to 10 other languages. This dataset is released by Meta AI. # Languages ru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my. # Data Splits This dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multlingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment. # Access English StoryCloze Please request the original English StoryCloze dataset through the official channel. You can create a split of the en data following our data split scheme using the following commands: # Licence XStoryCloze is opensourced under CC BY-SA 4.0, the same license as the original English StoryCloze. If you use XStoryCloze in your work, please cite
[ "# Languages\nru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my.", "# Data Splits\nThis dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multlingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment.", "# Access English StoryCloze\nPlease request the original English StoryCloze dataset through the official channel. You can create a split of the en data following our data split scheme using the following commands:", "# Licence\nXStoryCloze is opensourced under CC BY-SA 4.0, the same license as the original English StoryCloze.\n\nIf you use XStoryCloze in your work, please cite" ]
[ "TAGS\n#arxiv-2112.10668 #region-us \n", "# Languages\nru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my.", "# Data Splits\nThis dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multlingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment.", "# Access English StoryCloze\nPlease request the original English StoryCloze dataset through the official channel. You can create a split of the en data following our data split scheme using the following commands:", "# Licence\nXStoryCloze is opensourced under CC BY-SA 4.0, the same license as the original English StoryCloze.\n\nIf you use XStoryCloze in your work, please cite" ]
[ 15, 33, 74, 41, 42 ]
[ "passage: TAGS\n#arxiv-2112.10668 #region-us \n# Languages\nru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my.# Data Splits\nThis dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multlingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment.# Access English StoryCloze\nPlease request the original English StoryCloze dataset through the official channel. You can create a split of the en data following our data split scheme using the following commands:# Licence\nXStoryCloze is opensourced under CC BY-SA 4.0, the same license as the original English StoryCloze.\n\nIf you use XStoryCloze in your work, please cite" ]
a16580eb510078482c7625c086cb75ca82c53007
# Dataset Card for Shadertoys-fine ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Licensing Information](#licensing-information) ## Dataset Description - **Repository:** https://github.com/Vipitis/project (private placeholder) ### Dataset Summary fine variant of the Shadertoys dataset (still WIP), where individual functions are avaialable as Datapoints. ### Supported Tasks and Leaderboards `language-modeling`: The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages. ### Languages - English (names, comments) - Shadercode **programming** language ## Dataset Structure ### Data Instances A data point consists of the function string, it's name as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments) ``` { 'name': '<type> <name>', 'code': '<type> <name>(<inputs>) { <body> return <outputs>; }\n', 'source': 'https://shadertoy.com/view/<shaderID>', 'author': '<username>' } ``` A data point in the `return_completion` subset for the return-completion task in [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval) includes just two features: ``` { 'body': '<type> <name> <type> <name>(<inputs>) { <body> return', 'return_statment': ' <outputs>: }\n', } ``` ### Data Fields - 'name' funciton identifier composed of the type and the name of the function - 'code' the raw code (including comments) of function. - 'source' URL to the shader. It might be on a different renderpass - 'author' username of the shader author - 'body' the body of the function without the return statement (no comments) - 'return_statment' the return statement of the function. everything infront of the semicolon is kept and white sapces are stripped in the custome Evaluator. ### Data Splits Currently available (shuffled): - train (85.0%) - test (15.0%) These splits should be indexed the same across both subsets. So if you are fine-tuning on the `fine` subset you won't get exposed to the `return_completion` test split. However there are many duplicates among both subsets and splits. ## Dataset Creation Data retrieved starting 2022-07-20 ### Source Data #### Initial Data Collection and Normalization All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2) and then by looking for keywords and counting curly brackets to figure out what is part of a function and what isn't. #### Who are the source language producers? Shadertoy.com contributers which publish shaders as 'public+API' ## Licensing Information The Default [licnese for each Shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached. The Dataset is currently not filtering for any licensis.
Vipitis/Shadertoys-fine
[ "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "size_categories:100K<n<1M", "language:en", "language:code", "license:cc-by-nc-sa-3.0", "code", "region:us" ]
2022-07-22T09:45:36+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en", "code"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "Shadertoys-fine", "tags": ["code"], "dataset_info": [{"config_name": "default", "features": [{"name": "name", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}], "download_size": 154529204, "dataset_size": 0}, {"config_name": "fine", "features": [{"name": "name", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119963236, "num_examples": 226910}, {"name": "test", "num_bytes": 20003783, "num_examples": 38356}], "download_size": 154529204, "dataset_size": 139967019}, {"config_name": "return_completion", "features": [{"name": "body", "dtype": "string"}, {"name": "return_statement", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37597125, "num_examples": 84843}, {"name": "test", "num_bytes": 6360131, "num_examples": 14248}], "download_size": 154529204, "dataset_size": 43957256}]}
2023-05-04T21:37:17+00:00
[]
[ "en", "code" ]
TAGS #task_categories-text-generation #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-100K<n<1M #language-English #language-code #license-cc-by-nc-sa-3.0 #code #region-us
# Dataset Card for Shadertoys-fine ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Source Data - Licensing Information ## Dataset Description - Repository: URL (private placeholder) ### Dataset Summary fine variant of the Shadertoys dataset (still WIP), where individual functions are avaialable as Datapoints. ### Supported Tasks and Leaderboards 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages. ### Languages - English (names, comments) - Shadercode programming language ## Dataset Structure ### Data Instances A data point consists of the function string, it's name as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments) A data point in the 'return_completion' subset for the return-completion task in ShaderEval includes just two features: ### Data Fields - 'name' funciton identifier composed of the type and the name of the function - 'code' the raw code (including comments) of function. - 'source' URL to the shader. It might be on a different renderpass - 'author' username of the shader author - 'body' the body of the function without the return statement (no comments) - 'return_statment' the return statement of the function. everything infront of the semicolon is kept and white sapces are stripped in the custome Evaluator. ### Data Splits Currently available (shuffled): - train (85.0%) - test (15.0%) These splits should be indexed the same across both subsets. So if you are fine-tuning on the 'fine' subset you won't get exposed to the 'return_completion' test split. However there are many duplicates among both subsets and splits. ## Dataset Creation Data retrieved starting 2022-07-20 ### Source Data #### Initial Data Collection and Normalization All data was collected via the URL API and then by looking for keywords and counting curly brackets to figure out what is part of a function and what isn't. #### Who are the source language producers? URL contributers which publish shaders as 'public+API' ## Licensing Information The Default licnese for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached. The Dataset is currently not filtering for any licensis.
[ "# Dataset Card for Shadertoys-fine", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data\n- Licensing Information", "## Dataset Description\n\n- Repository: URL (private placeholder)", "### Dataset Summary\n\nfine variant of the Shadertoys dataset (still WIP), where individual functions are avaialable as Datapoints.", "### Supported Tasks and Leaderboards\n\n 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.", "### Languages\n\n- English (names, comments)\n- Shadercode programming language", "## Dataset Structure", "### Data Instances\n\nA data point consists of the function string, it's name as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments)\n\n\nA data point in the 'return_completion' subset for the return-completion task in ShaderEval includes just two features:", "### Data Fields\n\n- 'name' funciton identifier composed of the type and the name of the function\n- 'code' the raw code (including comments) of function.\n- 'source' URL to the shader. It might be on a different renderpass\n- 'author' username of the shader author\n\n- 'body' the body of the function without the return statement (no comments)\n- 'return_statment' the return statement of the function. everything infront of the semicolon is kept and white sapces are stripped in the custome Evaluator.", "### Data Splits\n\nCurrently available (shuffled):\n - train (85.0%)\n - test (15.0%)\n\nThese splits should be indexed the same across both subsets. So if you are fine-tuning on the 'fine' subset you won't get exposed to the 'return_completion' test split. However there are many duplicates among both subsets and splits.", "## Dataset Creation\n\nData retrieved starting 2022-07-20", "### Source Data", "#### Initial Data Collection and Normalization\n\nAll data was collected via the URL API and then by looking for keywords and counting curly brackets to figure out what is part of a function and what isn't.", "#### Who are the source language producers?\n\nURL contributers which publish shaders as 'public+API'", "## Licensing Information\n\nThe Default licnese for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached. The Dataset is currently not filtering for any licensis." ]
[ "TAGS\n#task_categories-text-generation #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-100K<n<1M #language-English #language-code #license-cc-by-nc-sa-3.0 #code #region-us \n", "# Dataset Card for Shadertoys-fine", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data\n- Licensing Information", "## Dataset Description\n\n- Repository: URL (private placeholder)", "### Dataset Summary\n\nfine variant of the Shadertoys dataset (still WIP), where individual functions are avaialable as Datapoints.", "### Supported Tasks and Leaderboards\n\n 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.", "### Languages\n\n- English (names, comments)\n- Shadercode programming language", "## Dataset Structure", "### Data Instances\n\nA data point consists of the function string, it's name as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments)\n\n\nA data point in the 'return_completion' subset for the return-completion task in ShaderEval includes just two features:", "### Data Fields\n\n- 'name' funciton identifier composed of the type and the name of the function\n- 'code' the raw code (including comments) of function.\n- 'source' URL to the shader. It might be on a different renderpass\n- 'author' username of the shader author\n\n- 'body' the body of the function without the return statement (no comments)\n- 'return_statment' the return statement of the function. everything infront of the semicolon is kept and white sapces are stripped in the custome Evaluator.", "### Data Splits\n\nCurrently available (shuffled):\n - train (85.0%)\n - test (15.0%)\n\nThese splits should be indexed the same across both subsets. So if you are fine-tuning on the 'fine' subset you won't get exposed to the 'return_completion' test split. However there are many duplicates among both subsets and splits.", "## Dataset Creation\n\nData retrieved starting 2022-07-20", "### Source Data", "#### Initial Data Collection and Normalization\n\nAll data was collected via the URL API and then by looking for keywords and counting curly brackets to figure out what is part of a function and what isn't.", "#### Who are the source language producers?\n\nURL contributers which publish shaders as 'public+API'", "## Licensing Information\n\nThe Default licnese for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached. The Dataset is currently not filtering for any licensis." ]
[ 76, 11, 63, 16, 34, 48, 19, 6, 81, 127, 89, 13, 4, 49, 23, 51 ]
[ "passage: TAGS\n#task_categories-text-generation #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-100K<n<1M #language-English #language-code #license-cc-by-nc-sa-3.0 #code #region-us \n# Dataset Card for Shadertoys-fine## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data\n- Licensing Information## Dataset Description\n\n- Repository: URL (private placeholder)### Dataset Summary\n\nfine variant of the Shadertoys dataset (still WIP), where individual functions are avaialable as Datapoints.### Supported Tasks and Leaderboards\n\n 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.### Languages\n\n- English (names, comments)\n- Shadercode programming language## Dataset Structure### Data Instances\n\nA data point consists of the function string, it's name as well as a bit of metadata like the author and source URL. (in the future there might be a function string without comments)\n\n\nA data point in the 'return_completion' subset for the return-completion task in ShaderEval includes just two features:### Data Fields\n\n- 'name' funciton identifier composed of the type and the name of the function\n- 'code' the raw code (including comments) of function.\n- 'source' URL to the shader. It might be on a different renderpass\n- 'author' username of the shader author\n\n- 'body' the body of the function without the return statement (no comments)\n- 'return_statment' the return statement of the function. everything infront of the semicolon is kept and white sapces are stripped in the custome Evaluator." ]
94f5828caf1fed6c4e59499abdfcd873a9c030a3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-b21ddcda-11615545
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T10:14:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T10:17:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 94, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9c16af46e39ca7b77c67d091885bafd8cb05ee48
Simpe Sentimental analysis Dataset checking with AUtoTrain Pipeline
ameerazam08/autotrain-data-imdb
[ "region:us" ]
2022-07-22T10:43:35+00:00
{}
2022-08-08T03:19:44+00:00
[]
[]
TAGS #region-us
Simpe Sentimental analysis Dataset checking with AUtoTrain Pipeline
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
8bb76e594b68147f1a430e86829d07189622b90d
# Dataset Card for "story_cloze" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning.This test requires a system to choose the correct ending to a four-sentence story. ### Data Instances - **Size of downloaded dataset files:** 2.03 MB - **Size of the generated dataset:** 2.03 MB - **Total amount of disk used:** 2.05 MB An example of 'train' looks as follows. ``` {'answer_right_ending': 1, 'input_sentence_1': 'Rick grew up in a troubled household.', 'input_sentence_2': 'He never found good support in family, and turned to gangs.', 'input_sentence_3': "It wasn't long before Rick got shot in a robbery.", 'input_sentence_4': 'The incident caused him to turn a new leaf.', 'sentence_quiz1': 'He is happy now.', 'sentence_quiz2': 'He joined a gang.', 'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'} ``` ### Data Fields The data fields are the same among all splits. - `input_sentence_1`: The first statement in the story. - `input_sentence_2`: The second statement in the story. - `input_sentence_3`: The third statement in the story. - `input_sentence_4`: The forth statement in the story. - `sentence_quiz1`: first possible continuation of the story. - `sentence_quiz2`: second possible continuation of the story. - `answer_right_ending`: correct possible ending; either 1 or 2. - `story_id`: story id. ### Data Splits | name |validation |test| |-------|-----:|---:| |lang|1871|1871|
Muennighoff/xstory_cloze
[ "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "language:es", "language:eu", "language:hi", "language:id", "language:zh", "language:ru", "language:my", "license:unknown", "other-story-completion", "region:us" ]
2022-07-22T10:52:19+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar", "es", "eu", "hi", "id", "zh", "ru", "my"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_ids": [], "tags": ["other-story-completion"]}
2022-10-20T18:44:18+00:00
[]
[ "ar", "es", "eu", "hi", "id", "zh", "ru", "my" ]
TAGS #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #language-Spanish #language-Basque #language-Hindi #language-Indonesian #language-Chinese #language-Russian #language-Burmese #license-unknown #other-story-completion #region-us
Dataset Card for "story\_cloze" =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- ### Dataset Summary Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning.This test requires a system to choose the correct ending to a four-sentence story. ### Data Instances * Size of downloaded dataset files: 2.03 MB * Size of the generated dataset: 2.03 MB * Total amount of disk used: 2.05 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. * 'input\_sentence\_1': The first statement in the story. * 'input\_sentence\_2': The second statement in the story. * 'input\_sentence\_3': The third statement in the story. * 'input\_sentence\_4': The forth statement in the story. * 'sentence\_quiz1': first possible continuation of the story. * 'sentence\_quiz2': second possible continuation of the story. * 'answer\_right\_ending': correct possible ending; either 1 or 2. * 'story\_id': story id. ### Data Splits
[ "### Dataset Summary\n\n\nStory Cloze Test' is a new commonsense reasoning framework for evaluating story understanding,\nstory generation, and script learning.This test requires a system to choose the correct ending\nto a four-sentence story.", "### Data Instances\n\n\n* Size of downloaded dataset files: 2.03 MB\n* Size of the generated dataset: 2.03 MB\n* Total amount of disk used: 2.05 MB\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'input\\_sentence\\_1': The first statement in the story.\n* 'input\\_sentence\\_2': The second statement in the story.\n* 'input\\_sentence\\_3': The third statement in the story.\n* 'input\\_sentence\\_4': The forth statement in the story.\n* 'sentence\\_quiz1': first possible continuation of the story.\n* 'sentence\\_quiz2': second possible continuation of the story.\n* 'answer\\_right\\_ending': correct possible ending; either 1 or 2.\n* 'story\\_id': story id.", "### Data Splits" ]
[ "TAGS\n#annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #language-Spanish #language-Basque #language-Hindi #language-Indonesian #language-Chinese #language-Russian #language-Burmese #license-unknown #other-story-completion #region-us \n", "### Dataset Summary\n\n\nStory Cloze Test' is a new commonsense reasoning framework for evaluating story understanding,\nstory generation, and script learning.This test requires a system to choose the correct ending\nto a four-sentence story.", "### Data Instances\n\n\n* Size of downloaded dataset files: 2.03 MB\n* Size of the generated dataset: 2.03 MB\n* Total amount of disk used: 2.05 MB\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'input\\_sentence\\_1': The first statement in the story.\n* 'input\\_sentence\\_2': The second statement in the story.\n* 'input\\_sentence\\_3': The third statement in the story.\n* 'input\\_sentence\\_4': The forth statement in the story.\n* 'sentence\\_quiz1': first possible continuation of the story.\n* 'sentence\\_quiz2': second possible continuation of the story.\n* 'answer\\_right\\_ending': correct possible ending; either 1 or 2.\n* 'story\\_id': story id.", "### Data Splits" ]
[ 105, 50, 52, 167, 5 ]
[ "passage: TAGS\n#annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #language-Spanish #language-Basque #language-Hindi #language-Indonesian #language-Chinese #language-Russian #language-Burmese #license-unknown #other-story-completion #region-us \n### Dataset Summary\n\n\nStory Cloze Test' is a new commonsense reasoning framework for evaluating story understanding,\nstory generation, and script learning.This test requires a system to choose the correct ending\nto a four-sentence story.### Data Instances\n\n\n* Size of downloaded dataset files: 2.03 MB\n* Size of the generated dataset: 2.03 MB\n* Total amount of disk used: 2.05 MB\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'input\\_sentence\\_1': The first statement in the story.\n* 'input\\_sentence\\_2': The second statement in the story.\n* 'input\\_sentence\\_3': The third statement in the story.\n* 'input\\_sentence\\_4': The forth statement in the story.\n* 'sentence\\_quiz1': first possible continuation of the story.\n* 'sentence\\_quiz2': second possible continuation of the story.\n* 'answer\\_right\\_ending': correct possible ending; either 1 or 2.\n* 'story\\_id': story id.### Data Splits" ]
de17e62a0b8f40bae1ff1bffd42916d46adc62a2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nbroad/deberta-v3-xsmall-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-a5d9cc45-11645552
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T12:14:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/deberta-v3-xsmall-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T12:17:28+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: nbroad/deberta-v3-xsmall-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nbroad for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nbroad for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nbroad for evaluating this model." ]
[ 13, 100, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nbroad for evaluating this model." ]
850e6e9d4e72b0b1bd5b8ecebdb169cc0afecc55
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-056210f3-11655553
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T14:07:34+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T14:10:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: distilbert-base-cased-distilled-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 93, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: distilbert-base-cased-distilled-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
0a49812b2507cee6824dbd859214a6dc75c3a32f
# Dataset Card for Common Voice Corpus 10.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://arxiv.org/abs/1912.06670 - **Leaderboard:** https://paperswithcode.com/dataset/common-voice - **Point of Contact:** [Anton Lozhkov](mailto:[email protected]) ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing. ### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the [🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench) ### Languages ``` Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`. ```python { 'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5', 'path': 'et/clips/common_voice_et_18318995.mp3', 'audio': { 'path': 'et/clips/common_voice_et_18318995.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000 }, 'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.', 'up_votes': 2, 'down_votes': 0, 'age': 'twenties', 'gender': 'male', 'accent': '', 'locale': 'et', 'segment': '' } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `sentence` (`string`): The sentence the user was prompted to speak `up_votes` (`int64`): How many upvotes the audio file has received from reviewers `down_votes` (`int64`): How many downvotes the audio file has received from reviewers `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`) `gender` (`string`): The gender of the speaker `accent` (`string`): Accent of the speaker `locale` (`string`): The locale of the speaker `segment` (`string`): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality. The invalidated data is data has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. Many examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation. ```python from datasets import load_dataset ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True) def prepare_dataset(batch): """Function to preprocess the dataset with the .map method""" transcription = batch["sentence"] if transcription.startswith('"') and transcription.endswith('"'): # we can remove trailing quotation marks as they do not affect the transcription transcription = transcription[1:-1] if transcription[-1] not in [".", "?", "!"]: # append a full-stop to sentences that do not end in punctuation transcription = transcription + "." batch["sentence"] = transcription return batch ds = ds.map(prepare_dataset, desc="preprocess dataset") ``` ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` @inproceedings{commonvoice:2020, author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.}, title = {Common Voice: A Massively-Multilingual Speech Corpus}, booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)}, pages = {4211--4215}, year = 2020 } ```
mozilla-foundation/common_voice_10_0
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
2022-07-22T14:10:26+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["n<1K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["100K<n<1M"], "bg": ["1K<n<10K"], "bn": ["100K<n<1M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["10K<n<100K"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["1K<n<10K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["100K<n<1M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["10K<n<100K"], "ga-IE": ["1K<n<10K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["1K<n<10K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "it": ["100K<n<1M"], "ja": ["10K<n<100K"], "ka": ["1K<n<10K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lt": ["10K<n<100K"], "lv": ["1K<n<10K"], "mdf": ["n<1K"], "mhr": ["10K<n<100K"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["n<1K"], "sk": ["10K<n<100K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "tig": ["n<1K"], "tok": ["1K<n<10K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 10.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "it", "ja", "ka", "kab", "kk", "kmr", "ky", "lg", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "sl", "sr", "sv-SE", "sw", "ta", "th", "tig", "tok", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "vot", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
2023-07-29T15:00:14+00:00
[ "1912.06670" ]
[]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us
# Dataset Card for Common Voice Corpus 10.0 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: URL - Point of Contact: Anton Lozhkov ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. Take a look at the Languages page to request a language or start contributing. ### Supported Tasks and Leaderboards The results for models trained on the Common Voice datasets are available via the Speech Bench ### Languages ## Dataset Structure ### Data Instances A typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'. ### Data Fields 'client_id' ('string'): An id for which client (voice) made the recording 'path' ('string'): The path to the audio file 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. 'sentence' ('string'): The sentence the user was prompted to speak 'up_votes' ('int64'): How many upvotes the audio file has received from reviewers 'down_votes' ('int64'): How many downvotes the audio file has received from reviewers 'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties') 'gender' ('string'): The gender of the speaker 'accent' ('string'): Accent of the speaker 'locale' ('string'): The locale of the speaker 'segment' ('string'): Usually an empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality. The invalidated data is data has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train. ## Data Preprocessing Recommended by Hugging Face The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. Many examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_. In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Public Domain, CC-0
[ "# Dataset Card for Common Voice Corpus 10.0", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Anton Lozhkov", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Speech Bench", "### Languages", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.", "### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field", "### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.", "## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nPublic Domain, CC-0" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n", "# Dataset Card for Common Voice Corpus 10.0", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Anton Lozhkov", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Speech Bench", "### Languages", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.", "### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field", "### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.", "## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nPublic Domain, CC-0" ]
[ 87, 10, 120, 32, 108, 30, 4, 6, 77, 378, 145, 233, 5, 7, 4, 10, 10, 5, 5, 9, 42, 8, 41, 8, 7, 5, 6, 11 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n# Dataset Card for Common Voice Corpus 10.0## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Anton Lozhkov### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Speech Bench### Languages## Dataset Structure### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.", "passage: ### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.## Considerations for Using the Data" ]
bb5a0bf1924a55a85433166cacc8384fd7c099dc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nbroad/xdistil-l12-h384-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-4938eeea-11665554
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T14:10:40+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/xdistil-l12-h384-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T14:13:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: nbroad/xdistil-l12-h384-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nbroad for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/xdistil-l12-h384-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nbroad for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/xdistil-l12-h384-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nbroad for evaluating this model." ]
[ 13, 100, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nbroad/xdistil-l12-h384-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nbroad for evaluating this model." ]
0513e0c12e945fa315e4fb166e3d741cb4413105
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-base-squad2-distilled * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@yjernite](https://huggingface.co/yjernite) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-b7567fd1-11675555
[ "autotrain", "evaluation", "region:us" ]
2022-07-22T14:51:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-07-22T14:54:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-base-squad2-distilled * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @yjernite for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @yjernite for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @yjernite for evaluating this model." ]
[ 13, 97, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @yjernite for evaluating this model." ]
ef655a3bfc18d977bb7d657ab87a6de404c883fc
# Dataset Card for Hansard speech ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://evanodell.com/projects/datasets/hansard-data/ - **Repository:** https://github.com/evanodell/hansard-data3 - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Evan Odell](https://github.com/evanodell) ### Dataset Summary A dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage > Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented "as is". ### Supported Tasks and Leaderboards - `text-classification`: This dataset can be used to classify various texts (transcribed from speeches) as different time periods or as different types - `language-modeling`: This dataset can contribute to the training or the evaluation of language models for historical texts. ### Languages `en:GB` ## Dataset Structure ### Data Instances ``` { 'id': 'uk.org.publicwhip/debate/1979-05-17a.390.0', 'speech': "Since the Minister for Consumer Affairs said earlier that the bread price rise would be allowed, in view of developing unemployment in the baking industry, and since the Mother's Pride bakery in my constituency is about to close, will the right hon. Gentleman give us a firm assurance that there will be an early debate on the future of the industry, so that the Government may announce that, thanks to the price rise, those workers will not now be put out of work?", 'display_as': 'Eric Heffer', 'party': 'Labour', 'constituency': 'Liverpool, Walton', 'mnis_id': '725', 'date': '1979-05-17', 'time': '', 'colnum': '390', 'speech_class': 'Speech', 'major_heading': 'BUSINESS OF THE HOUSE', 'minor_heading': '', 'oral_heading': '', 'year': '1979', 'hansard_membership_id': '5612', 'speakerid': 'uk.org.publicwhip/member/11615', 'person_id': '', 'speakername': 'Mr. Heffer', 'url': '', 'government_posts': [], 'opposition_posts': [], 'parliamentary_posts': ['Member, Labour Party National Executive Committee'] } ``` ### Data Fields |Variable|Description| |---|---| |id|The ID as assigned by mysociety| |speech|The text of the speech| |display_as| The standardised name of the MP.| |party|The party an MP is member of at time of speech| |constituency| Constituency represented by MP at time of speech| |mnis_id| The MP's Members Name Information Service number| |date|Date of speech| |time|Time of speech| |colnum |Column number in hansard record| |speech_class |Type of speech| |major_heading| Major debate heading| |minor_heading| Minor debate heading| |oral_heading| Oral debate heading| |year |Year of speech| |hansard_membership_id| ID used by mysociety| |speakerid |ID used by mysociety| |person_id |ID used by mysociety| |speakername| MP name as appeared in Hansard record for speech| |url| link to speech| |government_posts| Government posts held by MP (list)| |opposition_posts |Opposition posts held by MP (list)| |parliamentary_posts| Parliamentary posts held by MP (list)| ### Data Splits Train: 2694375 ## Dataset Creation ### Curation Rationale This dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks like detecting how language and societal views have changed over the >40 years. The dataset also provides language closer to the spoken language used in an elite British institution. ### Source Data #### Initial Data Collection and Normalization The dataset is created by getting the data from [data.parliament.uk](http://data.parliament.uk/membersdataplatform/memberquery.aspx). There is no normalization. #### Who are the source language producers? [N/A] ### Annotations #### Annotation process None #### Who are the annotators? [N/A] ### Personal and Sensitive Information This is public information, so there should not be any personal and sensitive information ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to understand how language use and society's views have changed over time. ### Discussion of Biases Because of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society. ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators This dataset was built on top of [parlparse](https://github.com/mysociety/parlparse) by [Evan Odell](https://github.com/evanodell) ### Licensing Information Creative Commons Attribution 4.0 International License ### Citation Information ``` @misc{odell, evan_2021, title={Hansard Speeches 1979-2021: Version 3.1.0}, DOI={10.5281/zenodo.4843485}, abstractNote={<p>Full details are available at <a href="https://evanodell.com/projects/datasets/hansard-data">https://evanodell.com/projects/datasets/hansard-data</a></p> <p><strong>Version 3.1.0 contains the following changes:</strong></p> <p>- Coverage up to the end of April 2021</p>}, note={This release is an update of previously released datasets. See full documentation for details.}, publisher={Zenodo}, author={Odell, Evan}, year={2021}, month={May} } ``` Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset.
biglam/hansard_speech
[ "task_categories:text-classification", "task_categories:text-generation", "task_ids:multi-class-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "speeches", "politics", "parliament", "British", "region:us" ]
2022-07-22T20:57:59+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": ["multi-class-classification", "language-modeling", "masked-language-modeling"], "pretty_name": "Hansard Speeches", "tags": ["speeches", "politics", "parliament", "British"]}
2022-07-27T11:30:30+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text-generation #task_ids-multi-class-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #speeches #politics #parliament #British #region-us
Dataset Card for Hansard speech =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: Evan Odell ### Dataset Summary A dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage > > Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented "as is". > > > ### Supported Tasks and Leaderboards * 'text-classification': This dataset can be used to classify various texts (transcribed from speeches) as different time periods or as different types * 'language-modeling': This dataset can contribute to the training or the evaluation of language models for historical texts. ### Languages 'en:GB' Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Train: 2694375 Dataset Creation ---------------- ### Curation Rationale This dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks like detecting how language and societal views have changed over the >40 years. The dataset also provides language closer to the spoken language used in an elite British institution. ### Source Data #### Initial Data Collection and Normalization The dataset is created by getting the data from URL. There is no normalization. #### Who are the source language producers? [N/A] ### Annotations #### Annotation process None #### Who are the annotators? [N/A] ### Personal and Sensitive Information This is public information, so there should not be any personal and sensitive information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The purpose of this dataset is to understand how language use and society's views have changed over time. ### Discussion of Biases Because of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society. ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was built on top of parlparse by Evan Odell ### Licensing Information Creative Commons Attribution 4.0 International License Thanks to @shamikbose for adding this dataset.
[ "### Dataset Summary\n\n\nA dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage\n\n\n\n> \n> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented \"as is\".\n> \n> \n>", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification': This dataset can be used to classify various texts (transcribed from speeches) as different time periods or as different types\n* 'language-modeling': This dataset can contribute to the training or the evaluation of language models for historical texts.", "### Languages\n\n\n'en:GB'\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nTrain: 2694375\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks like detecting how language and societal views have changed over the >40 years. The dataset also provides language closer to the spoken language used in an elite British institution.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset is created by getting the data from URL. There is no normalization.", "#### Who are the source language producers?\n\n\n[N/A]", "### Annotations", "#### Annotation process\n\n\nNone", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nThis is public information, so there should not be any personal and sensitive information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to understand how language use and society's views have changed over time.", "### Discussion of Biases\n\n\nBecause of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was built on top of parlparse by Evan Odell", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International License\n\n\nThanks to @shamikbose for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text-generation #task_ids-multi-class-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #speeches #politics #parliament #British #region-us \n", "### Dataset Summary\n\n\nA dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage\n\n\n\n> \n> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented \"as is\".\n> \n> \n>", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification': This dataset can be used to classify various texts (transcribed from speeches) as different time periods or as different types\n* 'language-modeling': This dataset can contribute to the training or the evaluation of language models for historical texts.", "### Languages\n\n\n'en:GB'\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nTrain: 2694375\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks like detecting how language and societal views have changed over the >40 years. The dataset also provides language closer to the spoken language used in an elite British institution.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset is created by getting the data from URL. There is no normalization.", "#### Who are the source language producers?\n\n\n[N/A]", "### Annotations", "#### Annotation process\n\n\nNone", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nThis is public information, so there should not be any personal and sensitive information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to understand how language use and society's views have changed over time.", "### Discussion of Biases\n\n\nBecause of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was built on top of parlparse by Evan Odell", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International License\n\n\nThanks to @shamikbose for adding this dataset." ]
[ 142, 84, 74, 16, 6, 5, 16, 72, 4, 28, 15, 5, 7, 14, 33, 29, 35, 14, 21, 25 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #task_ids-multi-class-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #speeches #politics #parliament #British #region-us \n### Dataset Summary\n\n\nA dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage\n\n\n\n> \n> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented \"as is\".\n> \n> \n>### Supported Tasks and Leaderboards\n\n\n* 'text-classification': This dataset can be used to classify various texts (transcribed from speeches) as different time periods or as different types\n* 'language-modeling': This dataset can contribute to the training or the evaluation of language models for historical texts.### Languages\n\n\n'en:GB'\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields### Data Splits\n\n\nTrain: 2694375\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThis dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks like detecting how language and societal views have changed over the >40 years. The dataset also provides language closer to the spoken language used in an elite British institution.### Source Data#### Initial Data Collection and Normalization\n\n\nThe dataset is created by getting the data from URL. There is no normalization.#### Who are the source language producers?\n\n\n[N/A]### Annotations#### Annotation process\n\n\nNone#### Who are the annotators?\n\n\n[N/A]" ]