Columns:
id: string (length 2–115)
private: bool (1 class)
tags: sequence
description: string (length 0–5.93k)
downloads: int64 (0–1.14M)
likes: int64 (0–1.79k)
rcds/swiss_judgment_prediction
false
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "language:fr", "language:it", "language:en", "license:cc-by-sa-4.0", "judgement-prediction", "arxiv:2110.00806", "arxiv:2209.12325" ]
Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
1,306
7
tab_fact
false
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.02164" ]
The problem of verifying whether a textual hypothesis holds true based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are restricted to dealing with unstructured textual evidence (e.g., sentences and passages, a pool of passages), while verification using structured forms of evidence, such as tables, graphs, and databases, remains unexplored. TABFACT is a large-scale dataset with 16k Wikipedia tables serving as evidence for 118k human-annotated statements designed for fact verification with semi-structured evidence. The statements are labeled as either ENTAILED or REFUTED. TABFACT is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. A loading sketch follows this entry.
1,457
1
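For the TabFact entry above, a minimal loading sketch using the Hugging Face `datasets` library; the configuration names and exact field layout are assumptions, so the sketch lists the configurations before picking one.
```
from datasets import get_dataset_config_names, load_dataset

# configuration names are an assumption; list them before choosing one
configs = get_dataset_config_names("tab_fact")
print(configs)

ds = load_dataset("tab_fact", configs[0], split="train")
print(ds[0])  # one statement with its table evidence and its ENTAILED/REFUTED label (encoded as an integer)
```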
tamilmixsentiment
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:ta", "license:unknown" ]
The first gold-standard Tamil-English code-switched, sentiment-annotated corpus, containing 15,744 comment posts from YouTube (train: 11,335, validation: 1,260, test: 3,149). This makes it the largest general-domain sentiment dataset for this relatively low-resource language with the code-mixing phenomenon. The dataset contains all three types of code-mixed sentences: inter-sentential switching, intra-sentential switching and tag switching. Most comments were written in Roman script, with either Tamil grammar and English lexicon or English grammar and Tamil lexicon. Some comments were written in Tamil script with English expressions in between. A loading sketch that checks the split sizes follows this entry.
266
0
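A short sketch that loads the corpus above and checks the stated 11,335/1,260/3,149 split sizes; the split names are the standard `datasets` ones and are assumed here.
```
from datasets import load_dataset

ds = load_dataset("tamilmixsentiment")
# expected sizes (assumed split names): train 11,335, validation 1,260, test 3,149
for split, data in ds.items():
    print(split, len(data))
print(ds["train"][0])  # a code-mixed YouTube comment with its sentiment label
```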
tanzil
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:am", "language:ar", "language:az", "language:bg", "language:bn", "language:bs", "language:cs", "language:de", "language:dv", "language:en", "language:es", "language:fa", "language:fr", "language:ha", "language:hi", "language:id", "language:it", "language:ja", "language:ko", "language:ku", "language:ml", "language:ms", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:sd", "language:so", "language:sq", "language:sv", "language:sw", "language:ta", "language:tg", "language:th", "language:tr", "language:tt", "language:ug", "language:ur", "language:uz", "language:zh", "license:unknown" ]
This is a collection of Quran translations compiled by the Tanzil project. The translations provided at this page are for non-commercial purposes only; if used otherwise, you need to obtain the necessary permission from the translator or the publisher. If you are using more than three of the translations in a website or application, we require you to put a link back to this page to make sure that subsequent users have access to the latest updates. 42 languages, 878 bitexts; total number of files: 105; total number of tokens: 22.33M; total number of sentence fragments: 1.01M. A loading sketch follows this entry.
806
2
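A hedged sketch for the Tanzil bitexts above; the assumption is that each language pair is exposed as its own configuration, so the available pairs are listed first.
```
from datasets import get_dataset_config_names, load_dataset

pairs = get_dataset_config_names("tanzil")  # assumed: one configuration per language pair
print(pairs[:10])

ds = load_dataset("tanzil", pairs[0], split="train")
print(ds[0])  # a parallel sentence pair, typically nested under a "translation" field
```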
tapaco
false
[ "task_categories:text2text-generation", "task_categories:translation", "task_categories:text-classification", "task_ids:semantic-similarity-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:extended|other-tatoeba", "language:af", "language:ar", "language:az", "language:be", "language:ber", "language:bg", "language:bn", "language:br", "language:ca", "language:cbk", "language:cmn", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fi", "language:fr", "language:gl", "language:gos", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:ia", "language:id", "language:ie", "language:io", "language:is", "language:it", "language:ja", "language:jbo", "language:kab", "language:ko", "language:kw", "language:la", "language:lfn", "language:lt", "language:mk", "language:mr", "language:nb", "language:nds", "language:nl", "language:orv", "language:ota", "language:pes", "language:pl", "language:pt", "language:rn", "language:ro", "language:ru", "language:sl", "language:sr", "language:sv", "language:tk", "language:tl", "language:tlh", "language:tok", "language:tr", "language:tt", "language:ug", "language:uk", "language:ur", "language:vi", "language:vo", "language:war", "language:wuu", "language:yue", "license:cc-by-2.0", "paraphrase-generation" ]
A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words. The paraphrase corpus is created by populating a graph with Tatoeba sentences and equivalence links between sentences “meaning the same thing”. This graph is then traversed to extract sets of paraphrases. Several language-independent filters and pruning steps are applied to remove uninteresting sentences. A manual evaluation performed on three languages shows that between half and three quarters of the inferred paraphrases are correct and that most of the remaining ones are either correct but trivial, or near-paraphrases that neutralize a morphological distinction. The corpus contains a total of 1.9 million sentences, with 200 to 250,000 sentences per language. It covers a range of languages for which, to our knowledge, no other paraphrase dataset exists.
10,149
23
tashkeela
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ar", "license:gpl-2.0", "diacritics-prediction" ]
Arabic vocalized texts. It contains 75 million fully vocalized words, mainly from 97 books of classical and modern Arabic.
276
0
taskmaster1
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.05358" ]
Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves.
579
1
taskmaster2
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.05358" ]
Taskmaster-2 is a dataset for goal-oriented conversations. It consists of 17,289 dialogs across seven domains: restaurants, food ordering, movies, hotels, flights, music and sports. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface. A loading sketch follows this entry.
1,827
1
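Since Taskmaster-2 spans seven domains, a sketch that enumerates them may help; treating each domain as a separate configuration and the `train` split name are assumptions, so the actual field names are inspected rather than guessed.
```
from datasets import get_dataset_config_names, load_dataset

domains = get_dataset_config_names("taskmaster2")  # assumed: one configuration per domain
print(domains)

ds = load_dataset("taskmaster2", domains[0], split="train")  # split name assumed
print(list(ds[0].keys()))  # inspect the dialog's actual fields (speaker turns, annotations, ...)
```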
taskmaster3
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.05358" ]
Taskmaster-3 is a dataset for goal-oriented conversations, consisting of 23,757 movie ticketing dialogs. By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or to opt out of the transaction. This collection was created using the "self-dialog" method, in which a single crowdsourced worker is paid to write the entire conversation, producing the turns of both speakers, i.e. the customer and the ticketing agent.
380
0
tatoeba
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ab", "language:acm", "language:ady", "language:af", "language:afb", "language:afh", "language:aii", "language:ain", "language:ajp", "language:akl", "language:aln", "language:am", "language:an", "language:ang", "language:aoz", "language:apc", "language:ar", "language:arq", "language:ary", "language:arz", "language:as", "language:ast", "language:avk", "language:awa", "language:ayl", "language:az", "language:ba", "language:bal", "language:bar", "language:be", "language:ber", "language:bg", "language:bho", "language:bjn", "language:bm", "language:bn", "language:bo", "language:br", "language:brx", "language:bs", "language:bua", "language:bvy", "language:bzt", "language:ca", "language:cay", "language:cbk", "language:ce", "language:ceb", "language:ch", "language:chg", "language:chn", "language:cho", "language:chr", "language:cjy", "language:ckb", "language:ckt", "language:cmn", "language:co", "language:code", "language:cpi", "language:crh", "language:crk", "language:cs", "language:csb", "language:cv", "language:cy", "language:da", "language:de", "language:dng", "language:drt", "language:dsb", "language:dtp", "language:dv", "language:dws", "language:ee", "language:egl", "language:el", "language:emx", "language:en", "language:enm", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fi", "language:fj", "language:fkv", "language:fo", "language:fr", "language:frm", "language:fro", "language:frr", "language:fuc", "language:fur", "language:fuv", "language:fy", "language:ga", "language:gag", "language:gan", "language:gbm", "language:gcf", "language:gd", "language:gil", "language:gl", "language:gn", "language:gom", "language:gos", "language:got", "language:grc", "language:gsw", "language:gu", "language:gv", "language:ha", "language:hak", "language:haw", "language:hbo", "language:he", "language:hi", "language:hif", "language:hil", "language:hnj", "language:hoc", "language:hr", "language:hrx", "language:hsb", "language:hsn", "language:ht", "language:hu", "language:hy", "language:ia", "language:iba", "language:id", "language:ie", "language:ig", "language:ii", "language:ike", "language:ilo", "language:io", "language:is", "language:it", "language:izh", "language:ja", "language:jam", "language:jbo", "language:jdt", "language:jpa", "language:jv", "language:ka", "language:kaa", "language:kab", "language:kam", "language:kek", "language:kha", "language:kjh", "language:kk", "language:kl", "language:km", "language:kmr", "language:kn", "language:ko", "language:koi", "language:kpv", "language:krc", "language:krl", "language:ksh", "language:ku", "language:kum", "language:kw", "language:kxi", "language:ky", "language:la", "language:laa", "language:lad", "language:lb", "language:ldn", "language:lfn", "language:lg", "language:lij", "language:liv", "language:lkt", "language:lld", "language:lmo", "language:ln", "language:lo", "language:lt", "language:ltg", "language:lut", "language:lv", "language:lzh", "language:lzz", "language:mad", "language:mai", "language:max", "language:mdf", "language:mfe", "language:mg", "language:mgm", "language:mh", "language:mhr", "language:mi", "language:mic", "language:min", "language:mk", "language:ml", "language:mn", "language:mni", "language:mnw", "language:moh", "language:mr", "language:mt", "language:mvv", "language:mwl", "language:mww", "language:my", "language:myv", 
"language:na", "language:nah", "language:nan", "language:nb", "language:nch", "language:nds", "language:ngt", "language:ngu", "language:niu", "language:nl", "language:nlv", "language:nn", "language:nog", "language:non", "language:nov", "language:npi", "language:nst", "language:nus", "language:nv", "language:ny", "language:nys", "language:oar", "language:oc", "language:ofs", "language:ood", "language:or", "language:orv", "language:os", "language:osp", "language:ota", "language:otk", "language:pa", "language:pag", "language:pal", "language:pam", "language:pap", "language:pau", "language:pcd", "language:pdc", "language:pes", "language:phn", "language:pi", "language:pl", "language:pms", "language:pnb", "language:ppl", "language:prg", "language:ps", "language:pt", "language:qu", "language:quc", "language:qya", "language:rap", "language:rif", "language:rm", "language:rn", "language:ro", "language:rom", "language:ru", "language:rue", "language:rw", "language:sa", "language:sah", "language:sc", "language:scn", "language:sco", "language:sd", "language:sdh", "language:se", "language:sg", "language:sgs", "language:shs", "language:shy", "language:si", "language:sjn", "language:sl", "language:sm", "language:sma", "language:sn", "language:so", "language:sq", "language:sr", "language:stq", "language:su", "language:sux", "language:sv", "language:swg", "language:swh", "language:syc", "language:ta", "language:te", "language:tet", "language:tg", "language:th", "language:thv", "language:ti", "language:tig", "language:tk", "language:tl", "language:tlh", "language:tly", "language:tmr", "language:tmw", "language:tn", "language:to", "language:toi", "language:tok", "language:tpi", "language:tpw", "language:tr", "language:ts", "language:tt", "language:tts", "language:tvl", "language:ty", "language:tyv", "language:tzl", "language:udm", "language:ug", "language:uk", "language:umb", "language:ur", "language:uz", "language:vec", "language:vep", "language:vi", "language:vo", "language:vro", "language:wa", "language:war", "language:wo", "language:wuu", "language:xal", "language:xh", "language:xqa", "language:yi", "language:yo", "language:yue", "language:zlm", "language:zsm", "language:zu", "language:zza", "license:cc-by-2.0" ]
This is a collection of translated sentences from Tatoeba. 359 languages, 3,403 bitexts; total number of files: 750; total number of tokens: 65.54M; total number of sentence fragments: 8.96M. A loading sketch follows this entry.
1,224
7
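The Tatoeba card above covers hundreds of languages, and a bitext is normally selected per language pair. The `lang1`/`lang2` keyword arguments below follow the dataset card's documented usage and should be treated as an assumption.
```
from datasets import load_dataset

# assumed usage: select a language pair via lang1/lang2
ds = load_dataset("tatoeba", lang1="en", lang2="fr", split="train")
print(len(ds))
print(ds[0])  # a parallel English-French sentence pair
```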
ted_hrlr
false
[ "task_categories:translation", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:extended|ted_talks_iwslt", "language:az", "language:be", "language:en", "language:es", "language:fr", "language:gl", "language:he", "language:it", "language:pt", "language:ru", "language:tr", "license:cc-by-nc-nd-4.0" ]
Data sets derived from TED talk transcripts for comparing similar language pairs where one is high resource and the other is low resource.
1,974
0
ted_iwlst2013
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "language:de", "language:en", "language:es", "language:fa", "language:fr", "language:it", "language:nl", "language:pl", "language:pt", "language:ro", "language:ru", "language:sl", "language:tr", "language:zh", "license:unknown" ]
A parallel corpus of TED talk subtitles provided by CASMACAT: http://www.casmacat.eu/corpus/ted2013.html. The files are originally provided by https://wit3.fbk.eu. 15 languages, 14 bitexts; total number of files: 28; total number of tokens: 67.67M; total number of sentence fragments: 3.81M.
2,036
0
ted_multi
false
[]
Massively multilingual (60 language) data set derived from TED Talk transcripts. Each record consists of parallel arrays of language and text. Missing and incomplete translations will be filtered out.
323
2
ted_talks_iwslt
false
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:af", "language:am", "language:ar", "language:arq", "language:art", "language:as", "language:ast", "language:az", "language:be", "language:bg", "language:bi", "language:bn", "language:bo", "language:bs", "language:ca", "language:ceb", "language:cnh", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fil", "language:fr", "language:ga", "language:gl", "language:gu", "language:ha", "language:he", "language:hi", "language:hr", "language:ht", "language:hu", "language:hup", "language:hy", "language:id", "language:ig", "language:inh", "language:is", "language:it", "language:ja", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lb", "language:lo", "language:lt", "language:ltg", "language:lv", "language:mg", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:nb", "language:ne", "language:nl", "language:nn", "language:oc", "language:pa", "language:pl", "language:ps", "language:pt", "language:ro", "language:ru", "language:rup", "language:sh", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:sv", "language:sw", "language:szl", "language:ta", "language:te", "language:tg", "language:th", "language:tl", "language:tlh", "language:tr", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:zh", "license:cc-by-nc-nd-4.0" ]
The core of WIT3 is the TED Talks corpus, which redistributes the original content published by the TED Conference website (http://www.ted.com). Since 2007, the TED Conference, based in California, has been posting all video recordings of its talks together with subtitles in English and their translations in more than 80 languages. Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community, thanks to its size, variety of topics, and covered languages. This effort repurposes the original content in a way that is more convenient for machine translation researchers.
3,029
3
telugu_books
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:te", "license:unknown" ]
This dataset was created by scraping Telugu novels from teluguone.com. It can be used for NLP tasks such as topic modeling, word embeddings, and transfer learning.
266
0
telugu_news
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:multi-class-classification", "task_ids:topic-classification", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:te", "license:unknown" ]
This dataset contains Telugu language news articles along with respective topic labels (business, editorial, entertainment, nation, sport) extracted from the daily Andhra Jyoti. This dataset could be used to build Classification and Language Models.
267
0
tep_en_fa_para
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:fa", "license:unknown" ]
TEP: Tehran English-Persian parallel corpus. The first free Eng-Per corpus, provided by the Natural Language and Text Processing Laboratory, University of Tehran.
266
1
text2log
false
[ "task_categories:translation", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown" ]
The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda.
272
1
thai_toxicity_tweet
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:th", "license:cc-by-nc-3.0" ]
Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans following guidelines that include a 44-word dictionary. The authors obtained 2,027 toxic and 1,273 non-toxic tweets, each labeled by three annotators. The corpus analysis indicates that tweets containing toxic words are not always toxic; rather, a tweet is more likely to be toxic if it contains toxic words used in their original meaning. Moreover, disagreements in annotation are primarily caused by sarcasm, an unclear target, and word sense ambiguity. Notes from the data cleaner: the data was included in [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By that time, 506 of the tweets were no longer publicly available; these are denoted by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1). A sketch that filters out the missing tweets follows this entry.
285
2
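Because 506 tweets are marked `TWEET_NOT_FOUND` in the `tweet_text` field, a filtering sketch is shown below; the `train` split name is an assumption.
```
from datasets import load_dataset

ds = load_dataset("thai_toxicity_tweet", split="train")  # split name assumed
# drop the tweets that were no longer publicly available when the data was packaged
available = ds.filter(lambda x: x["tweet_text"] != "TWEET_NOT_FOUND")
print(len(ds), len(available))
```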
thainer
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-tirasaroj-aroonmanakun", "language:th", "license:cc-by-3.0" ]
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
266
1
thaiqa_squad
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-thaiqa", "language:th", "license:cc-by-nc-sa-3.0" ]
`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
522
1
thaisum
false
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:th", "license:mit" ]
ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists.
608
3
EleutherAI/the_pile
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "arxiv:2101.00027" ]
The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined.
3,943
103
the_pile_books3
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:mit", "arxiv:2101.00027" ]
This dataset is Shawn Presser's work and is part of the EleutherAI/The Pile dataset. It contains all of Bibliotik in plain .txt form, i.e. 197,000 books processed in exactly the same way as was done for bookcorpusopen (a.k.a. books1). It appears to be similar to OpenAI's mysterious "books2" dataset referenced in their papers; unfortunately OpenAI will not give details, so very little is known about any differences. People suspect it is "all of libgen", but that is purely conjecture.
1,879
12
the_pile_openwebtext2
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:text-scoring", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:mit", "arxiv:2101.00027" ]
OpenWebText2 is part of EleutherAi/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released.
710
6
the_pile_stack_exchange
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:2101.00027" ]
This dataset is part of the EleutherAI/The Pile dataset and was created for language modelling by processing the Stack Exchange data dump, an anonymized dump of all user-contributed content on the Stack Exchange network.
280
3
tilde_model
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hr", "language:hu", "language:is", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:no", "language:pl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sl", "language:sq", "language:sr", "language:sv", "language:tr", "language:uk", "license:cc-by-sa-4.0" ]
This is the Tilde MODEL Corpus – Multilingual Open Data for European Languages. The data has been collected from sites allowing free use and reuse of their content, as well as from public sector web sites. The activities have been undertaken as part of the ODINE Open Data Incubator for Europe, which aims to support the next generation of digital businesses and fast-track the development of new products and services. The corpus includes the following parts:
- Tilde MODEL - EESC: a multilingual corpus compiled from document texts of the European Economic and Social Committee document portal. Source: http://dm.eesc.europa.eu/
- Tilde MODEL - RAPID: a multilingual parallel corpus compiled from all press releases of the Press Release Database of the European Commission released between 1975 and the end of 2016, as available from http://europa.eu/rapid/
- Tilde MODEL - ECB: a multilingual parallel corpus compiled from the multilingual pages of the European Central Bank web site http://ebc.europa.eu/
- Tilde MODEL - EMA: a corpus compiled from texts of the European Medicines Agency document portal as available at http://www.ema.europa.eu/ at the end of 2016
- Tilde MODEL - World Bank: a corpus compiled from texts of the World Bank as available at http://www.worldbank.org/ in 2017
- Tilde MODEL - AirBaltic.com Travel Destinations: a multilingual parallel corpus compiled from description texts of AirBaltic.com travel destinations as available at https://www.airbaltic.com/en/destinations/ in 2017
- Tilde MODEL - LiveRiga.com: a multilingual parallel corpus compiled from Riga tourist attraction description texts of the http://liveriga.com/ web site in 2017
- Tilde MODEL - Lithuanian National Philharmonic Society: a parallel corpus compiled from texts of the Lithuanian National Philharmonic Society web site http://www.filharmonija.lt/ in 2017
- Tilde MODEL - mupa.hu: a parallel corpus from texts of Müpa Budapest, the web site of the Hungarian national culture house and concert venue https://www.mupa.hu/en/, compiled in spring 2017
- Tilde MODEL - fold.lv: a parallel corpus from texts of the fold.lv portal http://www.fold.lv/en/ of the best of Latvian and foreign creative industries, compiled in spring 2017
- Tilde MODEL - czechtourism.com: a multilingual parallel corpus from texts of the http://czechtourism.com/ portal, compiled in spring 2017
30 languages, 274 bitexts; total number of files: 125; total number of tokens: 1.43G; total number of sentence fragments: 62.44M
839
1
time_dial
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "dialog-act-classification", "arxiv:2106.04571" ]
TimeDial presents a crowdsourced English challenge set, for temporal commonsense reasoning, formulated as a multiple choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from the DailyDialog (Li et al., 2017), which is a multi-turn dialog corpus. In order to establish strong baselines and provide information on future model development, we conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, our qualitative error analyses show that the models often rely on shallow, spurious features (particularly text matching), instead of truly doing reasoning over the context.
274
1
times_of_india_news_headlines
false
[ "task_categories:text2text-generation", "task_categories:text-retrieval", "task_ids:document-retrieval", "task_ids:fact-checking-retrieval", "task_ids:text-simplification", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0" ]
This news dataset is a persistent historical archive of notable events in the Indian subcontinent from the start of 2001 to mid-2020, recorded in real time by the journalists of India. It contains approximately 3.3 million events published by the Times of India. As a news agency, the Times Group reaches a very wide audience across Asia and dwarfs every other agency in the quantity of English articles published per day. Due to the heavy daily volume over multiple years, this data offers deep insight into Indian society, its priorities, events, issues and talking points and how they have unfolded over time. It is possible to chop this dataset into smaller pieces for a more focused analysis, based on one or more facets.
269
0
timit_asr
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:other" ]
The TIMIT corpus of read speech was developed to provide speech data for acoustic-phonetic research studies and for the evaluation of automatic speech recognition systems. TIMIT contains high-quality recordings of 630 speakers of 8 different American English dialects, with each speaker reading up to 10 phonetically rich sentences. More info on the TIMIT dataset can be found in the README at https://catalog.ldc.upenn.edu/docs/LDC93S1/readme.txt
1,602
11
tiny_shakespeare
false
[]
40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/. To use for e.g. character-level modelling (a minimal sketch with the `datasets` library; the derived column names are illustrative):
```
import datasets

d = datasets.load_dataset('tiny_shakespeare')['train']
# split the raw text into characters; the train split includes the vocabulary for the other splits
d = d.map(lambda x: {'chars': list(x['text'])})
vocabulary = sorted(set(d[0]['chars']))
# build (current character, next character) pairs for next-character prediction;
# fixed-length sequence batching can then be done in the training framework of choice
d = d.map(lambda x: {'cur_char': x['chars'][:-1], 'next_char': x['chars'][1:]})
```
2,874
6
tlc
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:th", "license:unknown" ]
Thai Literature Corpora (TLC): corpora of machine-ingestible Thai classical literature texts. Release: 6/25/19. It consists of two datasets:
## TLC set
Texts from [Vajirayana Digital Library](https://vajirayana.org/), stored by chapters and stanzas (non-tokenized).
tlc v.2.0 (6/17/19): a total of 34 documents, 292,270 lines, 31,790,734 characters
tlc v.1.0 (6/11/19): a total of 25 documents, 113,981 lines, 28,775,761 characters
## TNHC set
Texts from the Thai National Historical Corpus, stored by lines (manually tokenized).
tnhc v.1.0 (6/25/19): a total of 47 documents, 756,478 lines, 13,361,142 characters
527
0
tmu_gfm_dataset
false
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "grammatical-error-correction" ]
A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in Yoshimura et al. (2020).
1,091
1
told-br
false
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:cc-by-sa-4.0", "hate-speech-detection", "arxiv:2010.04543" ]
ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming to create a group that is plural in terms of demographics (ethnicity, sexual orientation, age, gender). Each tweet was labeled by three annotators in 6 possible categories: LGBTQ+phobia, xenophobia, obscene, insult, misogyny and racism.
410
1
totto
false
[ "task_categories:table-to-text", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:2004.14373" ]
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
781
2
trec
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown" ]
The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set. The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words, and the vocabulary size is 8,700. The data were collected from four sources: 4,500 English questions published by USC (Hovy et al., 2001), about 500 manually constructed questions for a few rare classes, 894 TREC 8 and TREC 9 questions, and 500 questions from TREC 10, which serve as the test set. These questions were manually labeled. A loading sketch follows this entry.
30,014
25
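Since TREC ships both coarse and fine labels and the exact column names have varied between dataset versions, a sketch that inspects the features rather than hard-coding them may be safer.
```
from datasets import load_dataset

ds = load_dataset("trec", split="train")
print(ds.features)  # shows the coarse/fine label columns and their class names for this version
print(ds[0])
```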
trivia_qa
false
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:open-domain-qa", "task_ids:open-domain-abstractive-qa", "task_ids:extractive-qa", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1705.03551" ]
TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. A loading sketch follows this entry.
18,500
18
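TriviaQA is distributed in several configurations (e.g. with and without evidence context); the sketch below lists them instead of assuming a name, and the split name is an assumption.
```
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("trivia_qa")
print(configs)

ds = load_dataset("trivia_qa", configs[0], split="validation")  # split name assumed
print(ds[0]["question"])  # "question" is the expected field name
```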
tsac
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:aeb", "license:lgpl-3.0" ]
Tunisian Sentiment Analysis Corpus. About 17k user comments manually annotated with positive and negative polarities. The corpus was collected from Facebook user comments written on the official pages of Tunisian radio and TV channels, namely Mosaique FM, JawhraFM, Shemes FM, HiwarElttounsi TV and Nessma TV, over a period spanning January 2015 until June 2016.
267
0
ttc4900
false
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:tr", "license:unknown", "news-category-classification" ]
The dataset is taken from the Kemik group (http://www.kemik.yildiz.edu.tr/). The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth. We named it TTC4900, mimicking the naming convention of the TTC 3600 dataset shared by the study http://journals.sagepub.com/doi/abs/10.1177/0165551515620551. If you use the dataset in a paper, please refer to https://www.kaggle.com/savasy/ttc4900 in a footnote and cite one of the following papers:
- A Comparison of Different Approaches to Document Representation in Turkish Language, SDU Journal of Natural and Applied Science, Vol 22, Issue 2, 2018
- A comparative analysis of text classification for Turkish language, Pamukkale University Journal of Engineering Science, Volume 25, Issue 5, 2018
- A Knowledge-poor Approach to Turkish Text Categorization with a Comparative Analysis, Proceedings of CICLING 2014, Springer LNCS, Nepal, 2014
265
2
tunizi
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:aeb", "license:unknown", "arxiv:2004.14303" ]
On social media, Arabic speakers tend to express themselves in their own local dialect. To do so, Tunisians use "Tunisian Arabizi", which writes Tunisian in the Latin script supplemented with numerals rather than in the Arabic alphabet. TUNIZI is the first Tunisian Arabizi dataset, including 3K sentences, balanced, covering different topics, preprocessed and annotated as positive and negative.
269
0
tuple_ie
false
[ "task_categories:other", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "open-information-extraction" ]
The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries. This dataset contains 156K sentences collected for 4th grade questions and 107K sentences for 8th grade questions. Each sentence is followed by the Open IE v4 tuples in their simple format.
527
1
turk
false
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:gpl-3.0" ]
TURKCorpus is a dataset for evaluating sentence simplification systems that focus on lexical paraphrasing, as described in "Optimizing Statistical Machine Translation for Text Simplification". The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 8 times by different annotators.
1,196
2
turkic_xwmt
false
[ "task_categories:translation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:translation", "size_categories:n<1K", "source_datasets:extended|WMT 2020 News Translation Task", "language:az", "language:ba", "language:en", "language:kaa", "language:kk", "language:ky", "language:ru", "language:sah", "language:tr", "language:uz", "license:mit", "arxiv:2109.04593" ]
A Large-Scale Study of Machine Translation in Turkic Languages
11,939
8
turkish_movie_sentiment
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:tr", "license:unknown" ]
This dataset, from Kaggle, consists of Turkish movie reviews scored between 0 and 5.
271
3
turkish_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tr", "license:cc-by-4.0", "arxiv:1702.02363" ]
The Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. The authors constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains.
265
2
turkish_product_reviews
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tr", "license:unknown" ]
Turkish Product Reviews. This repository contains 235,165 product reviews collected online: 220,284 positive and 14,881 negative reviews.
286
3
turkish_shrinked_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-turkish_ner", "language:tr", "license:cc-by-4.0" ]
A shrunken version (48 entity types) of turkish_ner. Original turkish_ner dataset: an automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains. The shrunken entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle
265
1
turku_ner_corpus
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fi", "license:cc-by-nc-sa-4.0" ]
An open, broad-coverage corpus for Finnish named entity recognition presented in Luoma et al. (2020) A Broad-coverage Corpus for Finnish Named Entity Recognition.
270
0
tweet_eval
false
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:extended|other-tweet-datasets", "language:en", "license:unknown", "arxiv:2010.12421" ]
TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits. A loading sketch follows this entry.
30,146
54
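Each TweetEval task is presumably exposed as a separate configuration with fixed train/validation/test splits; the sketch below enumerates the tasks rather than assuming their names.
```
from datasets import get_dataset_config_names, load_dataset

tasks = get_dataset_config_names("tweet_eval")
print(tasks)

ds = load_dataset("tweet_eval", tasks[0])
print({split: len(data) for split, data in ds.items()})  # fixed train/validation/test splits
print(ds["train"][0])
```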
tweet_qa
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:1907.06292" ]
TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing.
555
2
tweets_ar_en_parallel
false
[ "task_categories:translation", "annotations_creators:expert-generated", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "language:en", "license:apache-2.0", "tweets-translation" ]
Twitter users often post parallel tweets—tweets that contain the same content but are written in different languages. Parallel tweets can be an important resource for developing machine translation (MT) systems among other natural language processing (NLP) tasks. This resource is a result of a generic method for collecting parallel tweets. Using the method, we compiled a bilingual corpus of English-Arabic parallel tweets and a list of Twitter accounts who post English-Arabic tweets regularly. Additionally, we annotate a subset of Twitter accounts with their countries of origin and topic of interest, which provides insights about the population who post parallel tweets.
541
3
tweets_hate_speech_detection
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:gpl-3.0" ]
The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets. Formally, given a training sample of tweets and labels, where label ‘1’ denotes the tweet is racist/sexist and label ‘0’ denotes the tweet is not racist/sexist, your objective is to predict the labels on the given test dataset.
554
8
twi_text_c3
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tw", "license:cc-by-nc-4.0" ]
Twi Text C3 is the largest collection of Twi texts; it was used to train the FastText embeddings in the YorubaTwi Embedding paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
268
0
twi_wordsim353
false
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:en", "language:tw", "license:unknown" ]
A translation of the word pair similarity dataset wordsim-353 to Twi. The dataset was presented in the paper Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yorùbá and Twi (LREC 2020).
265
1
tydiqa
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:extended|wikipedia", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0" ]
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but don’t know it yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD). A loading sketch follows this entry.
4,043
4
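TyDi QA is published in more than one configuration (assumed here to include a primary passage-selection task and a gold-passage task), so the sketch lists the configurations and inspects the columns instead of hard-coding field names.
```
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("tydiqa")
print(configs)

ds = load_dataset("tydiqa", configs[0], split="train")
print(ds.column_names)
print(ds[0])
```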
ubuntu_dialogs_corpus
false
[ "task_categories:conversational", "task_ids:dialogue-generation", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1506.08909" ]
Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter.
441
8
udhr
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:aa", "language:ab", "language:ace", "language:acu", "language:ada", "language:ady", "language:af", "language:agr", "language:aii", "language:ajg", "language:als", "language:alt", "language:am", "language:amc", "language:ame", "language:ami", "language:amr", "language:ar", "language:arl", "language:arn", "language:ast", "language:auc", "language:ay", "language:az", "language:ban", "language:bax", "language:bba", "language:bci", "language:be", "language:bem", "language:bfa", "language:bg", "language:bho", "language:bi", "language:bik", "language:bin", "language:blt", "language:bm", "language:bn", "language:bo", "language:boa", "language:br", "language:bs", "language:buc", "language:bug", "language:bum", "language:ca", "language:cab", "language:cak", "language:cbi", "language:cbr", "language:cbs", "language:cbt", "language:cbu", "language:ccp", "language:ceb", "language:cfm", "language:ch", "language:chj", "language:chk", "language:chr", "language:cic", "language:cjk", "language:cjs", "language:cjy", "language:ckb", "language:cnh", "language:cni", "language:cnr", "language:co", "language:cof", "language:cot", "language:cpu", "language:crh", "language:cri", "language:crs", "language:cs", "language:csa", "language:csw", "language:ctd", "language:cy", "language:da", "language:dag", "language:ddn", "language:de", "language:dga", "language:dip", "language:duu", "language:dv", "language:dyo", "language:dyu", "language:dz", "language:ee", "language:el", "language:en", "language:eo", "language:es", "language:ese", "language:et", "language:eu", "language:eve", "language:evn", "language:fa", "language:fat", "language:fi", "language:fj", "language:fkv", "language:fo", "language:fon", "language:fr", "language:fuf", "language:fur", "language:fuv", "language:fvr", "language:fy", "language:ga", "language:gaa", "language:gag", "language:gan", "language:gd", "language:gjn", "language:gkp", "language:gl", "language:gld", "language:gn", "language:gsw", "language:gu", "language:guc", "language:guu", "language:gv", "language:gyr", "language:ha", "language:hak", "language:haw", "language:he", "language:hi", "language:hil", "language:hlt", "language:hmn", "language:hms", "language:hna", "language:hni", "language:hnj", "language:hns", "language:hr", "language:hsb", "language:hsn", "language:ht", "language:hu", "language:hus", "language:huu", "language:hy", "language:ia", "language:ibb", "language:id", "language:idu", "language:ig", "language:ii", "language:ijs", "language:ilo", "language:io", "language:is", "language:it", "language:iu", "language:ja", "language:jiv", "language:jv", "language:ka", "language:kaa", "language:kbd", "language:kbp", "language:kde", "language:kdh", "language:kea", "language:kek", "language:kg", "language:kha", "language:kjh", "language:kk", "language:kkh", "language:kl", "language:km", "language:kmb", "language:kn", "language:ko", "language:koi", "language:koo", "language:kqn", "language:kqs", "language:kr", "language:kri", "language:krl", "language:ktu", "language:ku", "language:kwi", "language:ky", "language:la", "language:lad", "language:lah", "language:lb", "language:lg", "language:lia", "language:lij", "language:lld", "language:ln", "language:lns", "language:lo", "language:lob", "language:lot", "language:loz", "language:lt", "language:lua", "language:lue", "language:lun", "language:lus", 
"language:lv", "language:mad", "language:mag", "language:mai", "language:mam", "language:man", "language:maz", "language:mcd", "language:mcf", "language:men", "language:mfq", "language:mg", "language:mh", "language:mi", "language:mic", "language:min", "language:miq", "language:mk", "language:ml", "language:mn", "language:mnw", "language:mor", "language:mos", "language:mr", "language:mt", "language:mto", "language:mxi", "language:mxv", "language:my", "language:mzi", "language:nan", "language:nb", "language:nba", "language:nds", "language:ne", "language:ng", "language:nhn", "language:nio", "language:niu", "language:niv", "language:njo", "language:nku", "language:nl", "language:nn", "language:not", "language:nr", "language:nso", "language:nv", "language:ny", "language:nym", "language:nyn", "language:nzi", "language:oaa", "language:oc", "language:ojb", "language:oki", "language:om", "language:orh", "language:os", "language:ote", "language:pa", "language:pam", "language:pap", "language:pau", "language:pbb", "language:pcd", "language:pcm", "language:pis", "language:piu", "language:pl", "language:pon", "language:pov", "language:ppl", "language:prq", "language:ps", "language:pt", "language:qu", "language:quc", "language:qug", "language:quh", "language:quy", "language:qva", "language:qvc", "language:qvh", "language:qvm", "language:qvn", "language:qwh", "language:qxn", "language:qxu", "language:rar", "language:rgn", "language:rm", "language:rmn", "language:rn", "language:ro", "language:ru", "language:rup", "language:rw", "language:sa", "language:sah", "language:sc", "language:sco", "language:se", "language:sey", "language:sg", "language:shk", "language:shn", "language:shp", "language:si", "language:sk", "language:skr", "language:sl", "language:slr", "language:sm", "language:sn", "language:snk", "language:snn", "language:so", "language:sr", "language:srr", "language:ss", "language:st", "language:su", "language:suk", "language:sus", "language:sv", "language:sw", "language:swb", "language:ta", "language:taj", "language:tbz", "language:tca", "language:tdt", "language:te", "language:tem", "language:tet", "language:tg", "language:th", "language:ti", "language:tiv", "language:tk", "language:tl", "language:tly", "language:tn", "language:to", "language:tob", "language:toi", "language:toj", "language:top", "language:tpi", "language:tr", "language:ts", "language:tsz", "language:tt", "language:tw", "language:ty", "language:tyv", "language:tzh", "language:tzm", "language:tzo", "language:udu", "language:ug", "language:uk", "language:umb", "language:und", "language:ur", "language:ura", "language:uz", "language:vai", "language:ve", "language:vec", "language:vep", "language:vi", "language:vmw", "language:wa", "language:war", "language:wo", "language:wuu", "language:wwa", "language:xh", "language:xsm", "language:yad", "language:yao", "language:yap", "language:yi", "language:ykg", "language:yo", "language:yrk", "language:yua", "language:yue", "language:za", "language:zam", "language:zdj", "language:zgh", "language:zh", "language:zlm", "language:zro", "language:ztu", "language:zu", "license:unknown" ]
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the first time, fundamental human rights to be universally protected. The Declaration was adopted by the UN General Assembly in Paris on 10 December 1948 during its 183rd plenary meeting. The dataset includes translations of the document in 464+ languages and dialects. © 1996–2009 The Office of the High Commissioner for Human Rights. This plain text version was prepared by the “UDHR in Unicode” project, https://www.unicode.org/udhr.
282
0
um005
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:ur", "license:unknown" ]
UMC005 English-Urdu is a parallel corpus of texts in the English and Urdu languages with sentence alignments. The corpus can be used for experiments with statistical machine translation. The texts come from four different sources: - Quran - Bible - Penn Treebank (Wall Street Journal) - Emille corpus The authors provide the religious texts of Quran and Bible for direct download. For licensing reasons, the Penn and Emille texts cannot be redistributed freely. However, if you already hold a license for the original corpora, we are able to provide scripts that will recreate our data on your disk. Our modifications include but are not limited to the following: - Correction of Urdu translations and manual sentence alignment of the Emille texts. - Manually corrected sentence alignment of the other corpora. - Our data split (training-development-test) so that our published experiments can be reproduced. - Tokenization (optional, but needed to reproduce our experiments). - Normalization (optional) of e.g. European vs. Urdu numerals, European vs. Urdu punctuation, removal of Urdu diacritics.
527
0
un_ga
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:en", "language:es", "language:fr", "language:ru", "language:zh", "license:unknown" ]
United Nations General Assembly resolutions: a six-language parallel corpus. This is a collection of translated documents from the United Nations, originally compiled into a translation memory by Alexandre Rafalovitch and Robert Dale (see http://uncorpora.org). 6 languages, 15 bitexts total number of files: 6 total number of tokens: 18.87M total number of sentence fragments: 0.44M
2,099
0
un_multi
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "language:de", "language:en", "language:es", "language:fr", "language:ru", "language:zh", "license:unknown" ]
This is a collection of translated documents from the United Nations. This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language
3,113
1
un_pc
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:ar", "language:en", "language:es", "language:fr", "language:ru", "language:zh", "license:unknown" ]
This parallel corpus consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish.
2,182
2
universal_dependencies
false
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:af", "language:aii", "language:ajp", "language:akk", "language:am", "language:apu", "language:aqz", "language:ar", "language:be", "language:bg", "language:bho", "language:bm", "language:br", "language:bxr", "language:ca", "language:ckt", "language:cop", "language:cs", "language:cu", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fo", "language:fr", "language:fro", "language:ga", "language:gd", "language:gl", "language:got", "language:grc", "language:gsw", "language:gun", "language:gv", "language:he", "language:hi", "language:hr", "language:hsb", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:kfm", "language:kk", "language:kmr", "language:ko", "language:koi", "language:kpv", "language:krl", "language:la", "language:lt", "language:lv", "language:lzh", "language:mdf", "language:mr", "language:mt", "language:myu", "language:myv", "language:nl", "language:no", "language:nyq", "language:olo", "language:orv", "language:otk", "language:pcm", "language:pl", "language:pt", "language:ro", "language:ru", "language:sa", "language:sk", "language:sl", "language:sme", "language:sms", "language:soj", "language:sq", "language:sr", "language:sv", "language:swl", "language:ta", "language:te", "language:th", "language:tl", "language:tpn", "language:tr", "language:ug", "language:uk", "language:ur", "language:vi", "language:wbp", "language:wo", "language:yo", "language:yue", "language:zh", "license:unknown", "constituency-parsing", "dependency-parsing" ]
Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal) Stanford dependencies (de Marneffe et al., 2006, 2008, 2014), Google universal part-of-speech tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets (Zeman, 2008).
11,241
9
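A minimal loading sketch for the universal_dependencies entry above, using the Hugging Face `datasets` library; the treebank config name "en_ewt" (English EWT) is an assumption and the column names are not asserted, so the code only prints whatever schema the loader exposes.

```python
# Sketch: load one Universal Dependencies treebank and inspect a parsed sentence.
# Assumes the `datasets` library is installed and that "en_ewt" is a valid config name.
from datasets import load_dataset

ud = load_dataset("universal_dependencies", "en_ewt", split="train")

# Token-level annotations (tokens, UPOS tags, dependency heads, ...) are expected;
# print the schema and one example rather than assuming exact column names.
print(ud.features)
print(ud[0])
```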
universal_morphologies
false
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:ady", "language:ang", "language:ar", "language:arn", "language:ast", "language:az", "language:ba", "language:be", "language:bg", "language:bn", "language:bo", "language:br", "language:ca", "language:ckb", "language:crh", "language:cs", "language:csb", "language:cu", "language:cy", "language:da", "language:de", "language:dsb", "language:el", "language:en", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fo", "language:fr", "language:frm", "language:fro", "language:frr", "language:fur", "language:fy", "language:ga", "language:gal", "language:gd", "language:gmh", "language:gml", "language:got", "language:grc", "language:gv", "language:hai", "language:he", "language:hi", "language:hu", "language:hy", "language:is", "language:it", "language:izh", "language:ka", "language:kbd", "language:kjh", "language:kk", "language:kl", "language:klr", "language:kmr", "language:kn", "language:krl", "language:kw", "language:la", "language:liv", "language:lld", "language:lt", "language:lud", "language:lv", "language:mk", "language:mt", "language:mwf", "language:nap", "language:nb", "language:nds", "language:nl", "language:nn", "language:nv", "language:oc", "language:olo", "language:osx", "language:pl", "language:ps", "language:pt", "language:qu", "language:ro", "language:ru", "language:sa", "language:sga", "language:sh", "language:sl", "language:sme", "language:sq", "language:sv", "language:swc", "language:syc", "language:te", "language:tg", "language:tk", "language:tr", "language:tt", "language:uk", "language:ur", "language:uz", "language:vec", "language:vep", "language:vot", "language:xcl", "language:xno", "language:yi", "language:zu", "license:cc-by-sa-3.0", "morphology" ]
The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages. The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning, typically carried by the lemma, and by a rendering of its inflectional form in terms of a bundle of morphological features from our schema. The specification of the schema is described in Sylak-Glassman (2016).
15,193
5
urdu_fake_news
false
[ "task_categories:text-classification", "task_ids:fact-checking", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ur", "license:unknown" ]
An Urdu fake news dataset containing news from five different domains: Sports, Health, Technology, Entertainment, and Business. The real news articles were collected by combining manual approaches.
437
0
urdu_sentiment_corpus
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ur", "license:unknown" ]
“Urdu Sentiment Corpus” (USC) shares the data of Urdu tweets for sentiment analysis and polarity detection. The dataset consists of tweets and comprises over 17,185 tokens, with 52% of the records labelled positive and 48% labelled negative.
267
1
vctk
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents.
306
4
vivos
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:vi", "license:cc-by-nc-sa-4.0" ]
VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recorded speech prepared for the Vietnamese automatic speech recognition task. The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, headed by Prof. Vu Hai Quan. We publish this corpus in the hope of attracting more scientists to work on Vietnamese speech recognition problems.
314
2
web_nlg
false
[ "task_categories:tabular-to-text", "task_ids:rdf-to-text", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-db_pedia", "source_datasets:original", "language:en", "language:ru", "license:cc-by-sa-3.0", "license:cc-by-nc-sa-4.0", "license:gfdl" ]
The WebNLG challenge consists of mapping data to text. The training data consists of data/text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b). a. (John_E_Blaha birthDate 1942_08_26) (John_E_Blaha birthPlace San_Antonio) (John_E_Blaha occupation Fighter_pilot) b. John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot. As the example illustrates, the task involves specific NLG subtasks such as sentence segmentation (how to chunk the input data into sentences), lexicalisation (of the DBpedia properties), aggregation (how to avoid repetitions) and surface realisation (how to build a syntactically correct and natural sounding text).
2,813
7
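As a purely illustrative restatement of the (a)/(b) example in the web_nlg entry above, the input/output structure can be written out as plain Python data; this is not the dataset's actual schema, just the triples and reference text from the description.

```python
# Illustration of a WebNLG data/text pair (from the example in the description;
# the tuple layout below is an illustration, not the dataset's exact schema).
triples = [
    ("John_E_Blaha", "birthDate", "1942_08_26"),
    ("John_E_Blaha", "birthPlace", "San_Antonio"),
    ("John_E_Blaha", "occupation", "Fighter_pilot"),
]
reference_text = "John E Blaha, born in San Antonio on 1942-08-26, worked as a fighter pilot"

for subj, pred, obj in triples:
    print(f"{subj} --{pred}--> {obj}")
print("target:", reference_text)
```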
web_of_science
false
[ "language:en" ]
The Web Of Science (WOS) dataset is a collection of data of published papers available from the Web of Science. WOS has been released in three versions: WOS-46985, WOS-11967 and WOS-5736. WOS-46985 is the full dataset. WOS-11967 and WOS-5736 are two subsets of WOS-46985.
586
2
web_questions
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown" ]
This dataset consists of 6,642 question/answer pairs. The questions are supposed to be answerable by Freebase, a large knowledge graph. The questions are mostly centered around a single named entity. The questions are popular ones asked on the web (at least in 2013).
6,509
5
weibo_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zh", "license:unknown" ]
Tags: PER (person name), LOC (location), GPE (geopolitical / administrative region), ORG (organization). Most labels have both a NAM (specific name) and a NOM (nominal or generic mention) subtag: PER.NAM — personal names (e.g. 张三); PER.NOM — nominal or categorical mentions of people (e.g. 穷人); LOC.NAM — specific location names (e.g. 紫玉山庄); LOC.NOM — generic location mentions (e.g. 大峡谷, 宾馆); GPE.NAM — names of administrative regions (e.g. 北京); ORG.NAM — specific organization names (e.g. 通惠医院); ORG.NOM — generic or collective organization mentions (e.g. 文艺公司).
796
5
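The weibo_ner tag scheme above can be restated as a small lookup table; the sketch below simply encodes that annotation scheme as a Python mapping (the English glosses are translations for readability, not an artifact shipped with the dataset).

```python
# Sketch: the Weibo NER tag scheme as a lookup table. Glosses are translated from
# the original Chinese documentation and are illustrative only.
WEIBO_NER_TAGS = {
    "PER.NAM": "specific personal name (e.g. 张三)",
    "PER.NOM": "nominal or categorical mention of a person (e.g. 穷人)",
    "LOC.NAM": "specific location name (e.g. 紫玉山庄)",
    "LOC.NOM": "generic location mention (e.g. 大峡谷, 宾馆)",
    "GPE.NAM": "name of an administrative region (e.g. 北京)",
    "ORG.NAM": "specific organization name (e.g. 通惠医院)",
    "ORG.NOM": "generic or collective organization mention (e.g. 文艺公司)",
}

for tag, gloss in WEIBO_NER_TAGS.items():
    print(f"{tag:8s} -> {gloss}")
```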
wi_locness
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "multilinguality:other-language-learner", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:other", "grammatical-error-correction" ]
Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level.
406
5
wider_face
false
[ "task_categories:object-detection", "task_ids:face-detection", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-wider", "language:en", "license:cc-by-nc-nd-4.0", "arxiv:1511.06523" ]
WIDER FACE dataset is a face detection benchmark dataset, of which images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion as depicted in the sample images. WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. Similar to MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we shall proceed to evaluate.
461
7
wiki40b
false
[ "language:en" ]
Cleaned-up text from 40+ Wikipedia language editions, restricted to pages that correspond to entities. The datasets have train/dev/test splits per language. The dataset is cleaned up by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata id of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
6,237
3
wiki_asp
false
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "aspect-based-summarization", "arxiv:2011.07832" ]
WikiAsp is a multi-domain, aspect-based summarization dataset in the encyclopedic domain. In this task, models are asked to summarize cited reference documents of a Wikipedia article into aspect-based summaries. Each of the 20 domains includes 10 domain-specific pre-defined aspects.
3,041
2
wiki_atomic_edits
false
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:ja", "language:ru", "language:zh", "license:cc-by-sa-4.0" ]
A dataset of atomic Wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages. An atomic edit is defined as an edit e applied to a natural language expression S as the insertion, deletion, or substitution of a sub-expression P such that both the original expression S and the resulting expression e(S) are well-formed semantic constituents (MacCartney, 2009). In this corpus, we release such atomic insertions and deletions made to sentences in Wikipedia.
2,245
8
wiki_auto
false
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-wikipedia", "language:en", "license:cc-by-sa-3.0", "arxiv:2005.02324" ]
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config), then trained a neural CRF system to predict these alignments. The trained model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto`, `auto_acl`, `auto_full_no_split`, and `auto_full_with_split` configs here).
996
5
wiki_bio
false
[ "task_categories:table-to-text", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:1603.07771" ]
This dataset gathers 728,321 biographies from Wikipedia. It aims at evaluating text generation algorithms. For each article, we extracted the first paragraph (text) and the infobox (structured data), both tokenized. Each infobox is encoded as a list of (field name, field value) pairs. We used Stanford CoreNLP (http://stanfordnlp.github.io/CoreNLP/) to preprocess the data, i.e. we broke the text into sentences and tokenized both the text and the field values. The dataset was randomly split into three subsets: train (80%), valid (10%), and test (10%).
6,617
4
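A minimal sketch of inspecting the (first paragraph, infobox) structure of wiki_bio with the Hugging Face `datasets` library; the exact column names are not stated above, so the code only prints the schema and one record rather than assuming them.

```python
# Sketch: load wiki_bio and inspect how the first paragraph and the infobox
# (field name / field value pairs) are exposed. Column names are not assumed.
from datasets import load_dataset

wiki_bio = load_dataset("wiki_bio", split="train")
print(wiki_bio.features)   # schema, including the infobox encoding
print(wiki_bio[0])         # one (infobox, first paragraph) record
```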
wiki_dpr
false
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "license:gfdl", "text-search", "arxiv:2004.04906" ]
This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model. It contains 21M passages from Wikipedia along with their DPR embeddings. The Wikipedia articles were split into multiple, disjoint text blocks of 100 words as passages.
6,942
11
wiki_hop
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "multi-hop", "arxiv:1710.06481" ]
WikiHop is open-domain and based on Wikipedia articles; the goal is to recover Wikidata information by hopping through documents. The goal is to answer text understanding queries by combining multiple facts that are spread across different documents.
12,807
1
wiki_lingua
false
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "language:cs", "language:de", "language:en", "language:es", "language:fr", "language:hi", "language:id", "language:it", "language:ja", "language:ko", "language:nl", "language:pt", "language:ru", "language:th", "language:tr", "language:vi", "language:zh", "license:cc-by-3.0", "arxiv:2010.03093" ]
WikiLingua is a large-scale multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were created by aligning the images that are used to describe each how-to step in an article.
2,811
6
wiki_movies
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-3.0", "arxiv:1606.03126" ]
The WikiMovies dataset consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb).
286
0
wiki_qa
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other" ]
Wiki Question Answering corpus from Microsoft
22,675
7
wiki_qa_ar
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:unknown" ]
An Arabic version of WikiQA produced by automatic machine translation; the selection of the best translation to incorporate into the corpus was crowdsourced.
261
0
wiki_snippets
false
[ "task_categories:text-generation", "task_categories:other", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:extended|wiki40b", "source_datasets:extended|wikipedia", "language:en", "license:unknown", "text-search" ]
Wikipedia version split into plain text snippets for dense semantic indexing.
900
0
wiki_source
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:sv", "license:unknown" ]
2 languages, total number of files: 132 total number of tokens: 1.80M total number of sentence fragments: 78.36k
266
0
wiki_split
false
[ "task_categories:text2text-generation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "split-and-rephrase", "arxiv:1808.09468" ]
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
910
1
wiki_summary
false
[ "task_categories:text2text-generation", "task_categories:translation", "task_categories:question-answering", "task_categories:summarization", "task_ids:abstractive-qa", "task_ids:explanation-generation", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:open-domain-abstractive-qa", "task_ids:text-simplification", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fa", "license:apache-2.0" ]
The dataset was extracted from Persian Wikipedia in the form of articles and highlights, cleaned into pairs of articles and highlights, with the articles' length (only in version 1.0.0) and the highlights' length reduced to a maximum of 512 and 128 tokens, respectively, to make it suitable for ParsBERT.
397
3
wikiann
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:ace", "language:af", "language:als", "language:am", "language:an", "language:ang", "language:ar", "language:arc", "language:arz", "language:as", "language:ast", "language:ay", "language:az", "language:ba", "language:bar", "language:be", "language:bg", "language:bh", "language:bn", "language:bo", "language:br", "language:bs", "language:ca", "language:cbk", "language:cdo", "language:ce", "language:ceb", "language:ckb", "language:co", "language:crh", "language:cs", "language:csb", "language:cv", "language:cy", "language:da", "language:de", "language:diq", "language:dv", "language:el", "language:eml", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fa", "language:fi", "language:fo", "language:fr", "language:frr", "language:fur", "language:fy", "language:ga", "language:gan", "language:gd", "language:gl", "language:gn", "language:gu", "language:hak", "language:he", "language:hi", "language:hr", "language:hsb", "language:hu", "language:hy", "language:ia", "language:id", "language:ig", "language:ilo", "language:io", "language:is", "language:it", "language:ja", "language:jbo", "language:jv", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ksh", "language:ku", "language:ky", "language:la", "language:lb", "language:li", "language:lij", "language:lmo", "language:ln", "language:lt", "language:lv", "language:lzh", "language:mg", "language:mhr", "language:mi", "language:min", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:mwl", "language:my", "language:mzn", "language:nan", "language:nap", "language:nds", "language:ne", "language:nl", "language:nn", "language:no", "language:nov", "language:oc", "language:or", "language:os", "language:pa", "language:pdc", "language:pl", "language:pms", "language:pnb", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:ru", "language:rw", "language:sa", "language:sah", "language:scn", "language:sco", "language:sd", "language:sgs", "language:sh", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:su", "language:sv", "language:sw", "language:szl", "language:ta", "language:te", "language:tg", "language:th", "language:tk", "language:tl", "language:tr", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:vec", "language:vep", "language:vi", "language:vls", "language:vo", "language:vro", "language:wa", "language:war", "language:wuu", "language:xmf", "language:yi", "language:yo", "language:yue", "language:zea", "language:zh", "license:unknown", "arxiv:1902.00193" ]
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 176 of the 282 languages from the original WikiANN corpus.
160,320
31
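A hedged loading sketch for a single wikiann language split; the per-language config name ("en") is assumed from the language tags above, and the token/tag column names ("tokens", "ner_tags") are assumptions, so the code falls back to printing the raw example if they differ.

```python
# Sketch: load the English split of WikiANN and print one IOB2-tagged sentence.
# Config "en" and columns "tokens"/"ner_tags" are assumptions.
from datasets import load_dataset

wikiann_en = load_dataset("wikiann", "en", split="train")
print(wikiann_en.features)

example = wikiann_en[0]
tokens = example.get("tokens")
tags = example.get("ner_tags")
if tokens is not None and tags is not None:
    print(list(zip(tokens, tags)))
else:
    print(example)
```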
wikicorpus
false
[ "task_categories:fill-mask", "task_categories:text-classification", "task_categories:text-generation", "task_categories:token-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:part-of-speech", "annotations_creators:machine-generated", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "language:en", "language:es", "license:gfdl", "word-sense-disambiguation", "lemmatization" ]
The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words.
1,679
3
wikihow
false
[]
WikiHow is a new large-scale dataset built from the online WikiHow (http://www.wikihow.com/) knowledge base. There are two features: - text: WikiHow answer texts. - headline: bold lines used as the summary. There are two separate versions: - all: consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries. - sep: consisting of each paragraph and its summary. Download "wikihowAll.csv" and "wikihowSep.csv" from https://github.com/mahnazkoupaee/WikiHow-Dataset and place them in the manual download folder (see https://www.tensorflow.org/datasets/api_docs/python/tfds/download/DownloadConfig). Train/validation/test splits are provided by the authors. Preprocessing is applied to remove short articles (abstract length < 0.75 article length) and clean up extra commas.
5,260
1
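Because the wikihow CSV files must be downloaded manually, loading the dataset is expected to look roughly like the sketch below; the config name "all" and the `data_dir` mechanics are assumptions, so check the dataset card for the exact procedure. The "text" and "headline" fields are the two features named in the description.

```python
# Sketch: load WikiHow after manually downloading wikihowAll.csv / wikihowSep.csv.
# Assumes "all" is a valid config and that the loader accepts a data_dir pointing
# at the folder containing the manually downloaded CSV files.
from datasets import load_dataset

wikihow = load_dataset(
    "wikihow",
    "all",                           # or "sep" for the per-paragraph version (assumed)
    data_dir="/path/to/manual/dir",  # folder with wikihowAll.csv / wikihowSep.csv
)
print(wikihow["train"][0]["text"][:200])
print(wikihow["train"][0]["headline"])
```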
wikipedia
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:original", "language:aa", "language:ab", "language:ace", "language:af", "language:ak", "language:als", "language:am", "language:an", "language:ang", "language:ar", "language:arc", "language:arz", "language:as", "language:ast", "language:atj", "language:av", "language:ay", "language:az", "language:azb", "language:ba", "language:bar", "language:bcl", "language:be", "language:bg", "language:bh", "language:bi", "language:bjn", "language:bm", "language:bn", "language:bo", "language:bpy", "language:br", "language:bs", "language:bug", "language:bxr", "language:ca", "language:cbk", "language:cdo", "language:ce", "language:ceb", "language:ch", "language:cho", "language:chr", "language:chy", "language:ckb", "language:co", "language:cr", "language:crh", "language:cs", "language:csb", "language:cu", "language:cv", "language:cy", "language:da", "language:de", "language:din", "language:diq", "language:dsb", "language:dty", "language:dv", "language:dz", "language:ee", "language:el", "language:eml", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fa", "language:ff", "language:fi", "language:fj", "language:fo", "language:fr", "language:frp", "language:frr", "language:fur", "language:fy", "language:ga", "language:gag", "language:gan", "language:gd", "language:gl", "language:glk", "language:gn", "language:gom", "language:gor", "language:got", "language:gu", "language:gv", "language:ha", "language:hak", "language:haw", "language:he", "language:hi", "language:hif", "language:ho", "language:hr", "language:hsb", "language:ht", "language:hu", "language:hy", "language:ia", "language:id", "language:ie", "language:ig", "language:ii", "language:ik", "language:ilo", "language:inh", "language:io", "language:is", "language:it", "language:iu", "language:ja", "language:jam", "language:jbo", "language:jv", "language:ka", "language:kaa", "language:kab", "language:kbd", "language:kbp", "language:kg", "language:ki", "language:kj", "language:kk", "language:kl", "language:km", "language:kn", "language:ko", "language:koi", "language:krc", "language:ks", "language:ksh", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lad", "language:lb", "language:lbe", "language:lez", "language:lfn", "language:lg", "language:li", "language:lij", "language:lmo", "language:ln", "language:lo", "language:lrc", "language:lt", "language:ltg", "language:lv", "language:lzh", "language:mai", "language:mdf", "language:mg", "language:mh", "language:mhr", "language:mi", "language:min", "language:mk", "language:ml", "language:mn", "language:mr", "language:mrj", "language:ms", "language:mt", "language:mus", "language:mwl", "language:my", "language:myv", "language:mzn", "language:na", "language:nah", "language:nan", "language:nap", "language:nds", "language:ne", "language:new", "language:ng", "language:nl", "language:nn", "language:no", "language:nov", "language:nrf", "language:nso", "language:nv", "language:ny", "language:oc", "language:olo", "language:om", "language:or", "language:os", "language:pa", "language:pag", "language:pam", "language:pap", "language:pcd", "language:pdc", 
"language:pfl", "language:pi", "language:pih", "language:pl", "language:pms", "language:pnb", "language:pnt", "language:ps", "language:pt", "language:qu", "language:rm", "language:rmy", "language:rn", "language:ro", "language:ru", "language:rue", "language:rup", "language:rw", "language:sa", "language:sah", "language:sat", "language:sc", "language:scn", "language:sco", "language:sd", "language:se", "language:sg", "language:sgs", "language:sh", "language:si", "language:sk", "language:sl", "language:sm", "language:sn", "language:so", "language:sq", "language:sr", "language:srn", "language:ss", "language:st", "language:stq", "language:su", "language:sv", "language:sw", "language:szl", "language:ta", "language:tcy", "language:tdt", "language:te", "language:tg", "language:th", "language:ti", "language:tk", "language:tl", "language:tn", "language:to", "language:tpi", "language:tr", "language:ts", "language:tt", "language:tum", "language:tw", "language:ty", "language:tyv", "language:udm", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vec", "language:vep", "language:vi", "language:vls", "language:vo", "language:vro", "language:wa", "language:war", "language:wo", "language:wuu", "language:xal", "language:xh", "language:xmf", "language:yi", "language:yo", "language:yue", "language:za", "language:zea", "language:zh", "language:zu", "license:cc-by-sa-3.0", "license:gfdl" ]
Wikipedia dataset containing cleaned articles of all languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language. Each example contains the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.).
40,599
142
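A minimal loading sketch for the wikipedia entry above; the dump-dated config name "20220301.en" and the "title"/"text" field names are assumptions that depend on which pre-processed dumps are available.

```python
# Sketch: load one language edition of the pre-processed Wikipedia dataset.
# The config name "20220301.en" (dump date + language code) and the field
# names "title"/"text" are assumptions.
from datasets import load_dataset

wiki_en = load_dataset("wikipedia", "20220301.en", split="train")
print(wiki_en[0]["title"])
print(wiki_en[0]["text"][:300])
```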
wikisql
false
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "text-to-sql", "arxiv:1709.00103" ]
A large crowd-sourced dataset for developing natural language interfaces for relational databases
5,139
21
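A hedged sketch of pulling one natural-language question and its SQL annotation from wikisql; the column names ("question", "sql") are assumptions, so the schema is printed first and missing keys are handled gracefully.

```python
# Sketch: inspect WikiSQL question / SQL pairs. Column names ("question", "sql")
# are assumptions; the schema is printed first to confirm them.
from datasets import load_dataset

wikisql = load_dataset("wikisql", split="train")
print(wikisql.features)

example = wikisql[0]
print(example.get("question", example))
print(example.get("sql", ""))
```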
wikitext
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "license:gfdl", "arxiv:1609.07843" ]
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
275,669
116
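A minimal sketch of loading raw WikiText text for language modeling; the config name "wikitext-2-raw-v1" is one commonly used variant and, together with the "text" column name, is an assumption.

```python
# Sketch: load the raw WikiText-2 variant and count tokens naively.
# Assumes the "wikitext-2-raw-v1" config exists and that lines live in a "text" column.
from datasets import load_dataset

wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
n_whitespace_tokens = sum(len(line["text"].split()) for line in wikitext)
print(f"{len(wikitext)} lines, ~{n_whitespace_tokens} whitespace-separated tokens")
```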
wikitext_tl39
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:fil", "language:tl", "license:gpl-3.0", "arxiv:1907.00409" ]
Large scale, unlabeled text dataset with 39 Million tokens in the training set. Inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means "Tagalog." Originally published in Cruz & Cheng (2019).
269
0
wili_2018
false
[ "task_categories:text-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ace", "language:af", "language:als", "language:am", "language:an", "language:ang", "language:ar", "language:arz", "language:as", "language:ast", "language:av", "language:ay", "language:az", "language:azb", "language:ba", "language:bar", "language:bcl", "language:be", "language:bg", "language:bho", "language:bjn", "language:bn", "language:bo", "language:bpy", "language:br", "language:bs", "language:bxr", "language:ca", "language:cbk", "language:cdo", "language:ce", "language:ceb", "language:chr", "language:ckb", "language:co", "language:crh", "language:cs", "language:csb", "language:cv", "language:cy", "language:da", "language:de", "language:diq", "language:dsb", "language:dty", "language:dv", "language:egl", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fa", "language:fi", "language:fo", "language:fr", "language:frp", "language:fur", "language:fy", "language:ga", "language:gag", "language:gd", "language:gl", "language:glk", "language:gn", "language:gu", "language:gv", "language:ha", "language:hak", "language:he", "language:hi", "language:hif", "language:hr", "language:hsb", "language:ht", "language:hu", "language:hy", "language:ia", "language:id", "language:ie", "language:ig", "language:ilo", "language:io", "language:is", "language:it", "language:ja", "language:jam", "language:jbo", "language:jv", "language:ka", "language:kaa", "language:kab", "language:kbd", "language:kk", "language:km", "language:kn", "language:ko", "language:koi", "language:kok", "language:krc", "language:ksh", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lad", "language:lb", "language:lez", "language:lg", "language:li", "language:lij", "language:lmo", "language:ln", "language:lo", "language:lrc", "language:lt", "language:ltg", "language:lv", "language:lzh", "language:mai", "language:map", "language:mdf", "language:mg", "language:mhr", "language:mi", "language:min", "language:mk", "language:ml", "language:mn", "language:mr", "language:mrj", "language:ms", "language:mt", "language:mwl", "language:my", "language:myv", "language:mzn", "language:nan", "language:nap", "language:nb", "language:nci", "language:nds", "language:ne", "language:new", "language:nl", "language:nn", "language:nrm", "language:nso", "language:nv", "language:oc", "language:olo", "language:om", "language:or", "language:os", "language:pa", "language:pag", "language:pam", "language:pap", "language:pcd", "language:pdc", "language:pfl", "language:pl", "language:pnb", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:roa", "language:ru", "language:rue", "language:rup", "language:rw", "language:sa", "language:sah", "language:sc", "language:scn", "language:sco", "language:sd", "language:sgs", "language:sh", "language:si", "language:sk", "language:sl", "language:sme", "language:sn", "language:so", "language:sq", "language:sr", "language:srn", "language:stq", "language:su", "language:sv", "language:sw", "language:szl", "language:ta", "language:tcy", "language:te", "language:tet", "language:tg", "language:th", "language:tk", "language:tl", "language:tn", "language:to", "language:tr", "language:tt", "language:tyv", "language:udm", "language:ug", "language:uk", "language:ur", "language:uz", "language:vec", "language:vep", 
"language:vi", "language:vls", "language:vo", "language:vro", "language:wa", "language:war", "language:wo", "language:wuu", "language:xh", "language:xmf", "language:yi", "language:yo", "language:zea", "language:zh", "license:odbl", "language-identification", "arxiv:1801.07779" ]
A benchmark dataset for language identification containing 235,000 paragraphs in 235 languages.
274
0
wino_bias
false
[ "task_categories:token-classification", "task_ids:coreference-resolution", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "arxiv:1804.06876" ]
WinoBias is a Winograd-schema dataset for coreference resolution focused on gender bias. The corpus contains Winograd-schema style sentences with entities corresponding to people referred to by their occupation (e.g. the nurse, the doctor, the carpenter).
67,474
9
winograd_wsc
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-coreference-resolution", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd: > The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. If the word is "feared", then "they" presumably refers to the city council; if it is "advocated" then "they" presumably refers to the demonstrators.
2,812
2
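A hedged sketch of loading the winograd_wsc schema collection; the config name "wsc273" and the use of a single "test" split are assumptions, and no column names are asserted.

```python
# Sketch: load the 273-example Winograd Schema collection and look at one schema.
# The config name "wsc273" and the "test" split are assumptions.
from datasets import load_dataset

wsc = load_dataset("winograd_wsc", "wsc273", split="test")
print(wsc.features)
print(wsc[0])
```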
winogrande
false
[ "language:en" ]
WinoGrande is a new collection of 44k problems, inspired by Winograd Schema Challenge (Levesque, Davis, and Morgenstern 2011), but adjusted to improve the scale and robustness against the dataset-specific bias. Formulated as a fill-in-a-blank task with binary options, the goal is to choose the right option for a given sentence which requires commonsense reasoning.
50,900
11
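A hedged loading sketch for the winogrande fill-in-the-blank task; the size-specific config name "winogrande_xl" and the sentence/option1/option2/answer column names are assumptions, so missing keys are handled with `.get`.

```python
# Sketch: load the XL training split of WinoGrande and show the two options.
# Config "winogrande_xl" and columns sentence/option1/option2/answer are assumed.
from datasets import load_dataset

winogrande = load_dataset("winogrande", "winogrande_xl", split="train")
ex = winogrande[0]
print(ex.get("sentence"))
print("option1:", ex.get("option1"),
      "| option2:", ex.get("option2"),
      "| answer:", ex.get("answer"))
```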
wiqa
false
[ "language:en" ]
The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions.
10,825
1