Columns:

| column | type | range |
| --- | --- | --- |
| id | string | lengths 2–115 |
| private | bool | 1 class |
| tags | sequence | |
| description | string | lengths 0–5.93k |
| downloads | int64 | 0–1.14M |
| likes | int64 | 0–1.79k |
Fraser/dream-coder
false
[ "language:en", "license:mit", "program-synthesis" ]
null
267
0
Fraser/python-lines
false
[]
Dataset of single lines of Python code taken from the [CodeSearchNet](https://github.com/github/CodeSearchNet) dataset. Context: this dataset allows checking the validity of Variational-Autoencoder latent spaces by testing what percentage of random/intermediate latent points can be greedily decoded into valid Python code. Content: each row has a parsable line of source code, `{'text': '{python source code line}'}`. Most lines are under 100 characters, and all are under 125 characters. Contains 2.6 million lines. All code is parsable into a Python 3 `ast` (see the sketch after this entry).
267
1
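A minimal sketch of the parseability check described in the `Fraser/python-lines` entry above; the `text` field comes from the entry's description, while the `train` split name is an assumption.

```python
# Sketch only: measure what fraction of lines parse into a Python 3 ast.
# The "train" split name is an assumption; the "text" field is taken
# from the dataset description above.
import ast

from datasets import load_dataset

lines = load_dataset("Fraser/python-lines", split="train")

def parses(source: str) -> bool:
    """Return True if the line is valid Python 3 source."""
    try:
        ast.parse(source)
        return True
    except (SyntaxError, ValueError):
        return False

valid = sum(parses(row["text"]) for row in lines)
print(f"{valid / len(lines):.2%} of {len(lines):,} lines parse")
```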
Fraser/python-state-changes
false
[ "language:code" ]
Python state changes from a single line of code.
413
3
Fraser/short-jokes
false
[]
Copy of the [Kaggle dataset](https://www.kaggle.com/abhinavmoudgil95/short-jokes), added to Hugging Face for ease of use. Description from Kaggle: Context: generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes. Visit my Github repository for more information regarding collection of data and the scripts used. Content: this dataset is a CSV file containing 231,657 jokes. Joke lengths range from 10 to 200 characters. Each line in the file contains a unique ID and a joke (a loading sketch follows this entry). Disclaimer: the jokes have been kept as clean as possible, but since the data was collected by scraping websites, a few jokes may be inappropriate or offensive to some people.
424
2
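A minimal sketch of reading the CSV layout the `Fraser/short-jokes` entry describes (one unique ID plus one joke per line); the file name `shortjokes.csv` and the `ID`/`Joke` column names follow the Kaggle release and are assumptions here.

```python
# Sketch only: iterate over the CSV of 231,657 jokes described above.
# The file name and column names ("shortjokes.csv", "ID", "Joke") are
# assumptions based on the Kaggle release.
import csv

with open("shortjokes.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        joke = row["Joke"]
        # Per the description, joke lengths range from 10 to 200 characters.
        assert 10 <= len(joke) <= 200, row["ID"]
```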
Fraser/wiki_sentences
false
[]
null
277
0
GEM/ART
false
[ "task_categories:other", "annotations_creators:automatically-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "reasoning", "arxiv:1908.05739", "arxiv:1906.05317" ]
The Abductive Natural Language Generation dataset from AI2.
279
2
GEM/BiSECT
false
[ "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:de", "language:en", "language:fr", "language:es", "license:other" ]
BiSECT is a Split and Rephrase corpus created via bilingual pivoting.
746
1
GEM/CrossWOZ
false
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:zh", "license:apache-2.0", "dialog-response-generation" ]
CrossWOZ is the first large-scale Chinese cross-domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances across 5 domains: hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotations of dialogue states and dialogue acts on both the user and system sides.
269
4
GEM/OrangeSum
false
[ "task_categories:summarization", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:fr", "license:other" ]
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation with 266M customers worldwide. Scraped pages cover almost a decade, from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous. Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.
401
0
GEM/RiSAWOZ
false
[ "task_categories:conversational", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:zh", "license:cc-by-4.0", "dialog-response-generation" ]
RiSAWOZ contains 11.2K human-to-human (H2H) multi-turn semantically annotated dialogues, with more than 150K utterances spanning 12 domains, which is larger than all previous annotated H2H conversational datasets. Both single- and multi-domain dialogues are constructed, accounting for 65% and 35%, respectively.
275
2
GEM/RotoWire_English-German
false
[ "task_categories:table-to-text", "annotations_creators:automatically-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "language:de", "license:cc-by-4.0", "data-to-text" ]
Dataset for the WNGT 2019 DGT shared task on "Document-Level Generation and Translation".
265
1
GEM/SIMPITIKI
false
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:it", "license:cc-by-4.0" ]
SIMPITIKI is a simplification corpus for Italian consisting of two sets of simplified pairs: the first is harvested from the Italian Wikipedia in a semi-automatic way; the second is manually annotated sentence-by-sentence from documents in the administrative domain.
133
2
GEM/SciDuet
false
[ "task_categories:other", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "text-to-slide" ]
SciDuet is the first publicly available dataset for the challenging task of document2slides generation. The dataset integrated into GEM is the ACL portion of the whole dataset described in https://aclanthology.org/2021.naacl-main.111.pdf. It contains the full Dev and Test sets, and a portion of the Train dataset. We additionally create a challenge dataset in which the slide titles do not match the section headers of the corresponding paper. Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code from https://github.com/IBM/document2slides to generate the training dataset from the online ICML/NeurIPS anthologies. In the released dataset, the original papers and slides (both in PDF format) are carefully processed by a combination of PDF/image processing toolkits. The text contents from multiple slides that correspond to the same slide title are merged.
264
1
GEM/Taskmaster
false
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0", "dialog-response-generation", "arxiv:2012.12458" ]
The Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or to opt out of the transaction. The columns are `gem_id`; `0` and `1` for serial numbering; `2` for the dialog text; and `id` for the authors' original id (see the sketch after this entry).
268
1
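A minimal sketch of inspecting the column layout the `GEM/Taskmaster` entry lists; the `train` split name is an assumption.

```python
# Sketch only: confirm the columns listed above (gem_id, 0, 1, 2, id).
# The "train" split name is an assumption.
from datasets import load_dataset

tm3 = load_dataset("GEM/Taskmaster", split="train")
print(tm3.column_names)   # expected per the description: gem_id, 0, 1, 2, id
print(tm3[0]["gem_id"])   # GEM identifier of the first dialog
```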
GEM/cochrane-simplification
false
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
This dataset measures a model's ability to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
271
0
GEM/common_gen
false
[ "task_categories:other", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:mit", "reasoning", "arxiv:1911.03705", "arxiv:1910.13461", "arxiv:2009.12677", "arxiv:2012.00366", "arxiv:1910.10683", "arxiv:2006.08315" ]
CommonGen is a constrained text generation task, associated with a benchmark dataset, that explicitly tests machines for generative commonsense reasoning: given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts.
264
0
GEM/conversational_weather
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "data-to-text" ]
The Conversational Weather dataset is designed for generating responses to weather queries based on structured input data. The input allows specifying data attributes such as dates, times, locations, weather conditions, and errors, and also offers control over the structure of the response through discourse relations such as join, contrast, and justification.
403
0
GEM/cs_restaurants
false
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:cs", "license:cc-by-sa-4.0", "dialog-response-generation" ]
The task is generating responses in the context of a (hypothetical) dialogue system that provides information about restaurants. The input is a basic intent/dialogue act type and a list of slots (attributes) and their values; the output is a natural language sentence (an illustration follows this entry).
265
1
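A hypothetical illustration of the input/output structure the `GEM/cs_restaurants` entry describes; the dialogue act, slot names, and sentences below are invented for illustration and are not taken from the corpus.

```python
# Invented example pair: a dialogue act with slot-value pairs in,
# a natural-language (Czech) sentence out. All field names and values
# here are assumptions for illustration only.
example = {
    "input": "inform(name='U Kohouta', price_range='cheap')",
    "target": "Restaurace U Kohouta je levná.",  # "U Kohouta is a cheap restaurant."
}
```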
GEM/dart
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:mit", "data-to-text", "arxiv:1910.13461", "arxiv:1908.09022", "arxiv:2007.02871", "arxiv:1709.00103", "arxiv:1706.09254", "arxiv:1810.01170" ]
DART is a large, open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations, where each input is a set of entity-relation triples following a tree-structured ontology. It consists of 82,191 examples across different domains, each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the table schema, annotated with a sentence description that covers all facts in the triple set.
269
0
GEM/dstc10_track2_task2
false
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "dialog-response-generation" ]
null
269
1
GEM/e2e_nlg
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "data-to-text" ]
The E2E dataset is designed for a limited-domain data-to-text task -- generation of restaurant descriptions/recommendations based on up to 8 different attributes (name, area, price range etc.).
277
2
GEM/mlb_data_to_text
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "data-to-text" ]
The MLB dataset for data-to-text generation contains Major League Baseball game statistics and their human-written summaries.
291
1
GEM/mlsum
false
[ "task_categories:summarization", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:de", "language:es", "license:other" ]
This is the MLSUM subset of the GEM benchmark. MLSUM is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages, namely French, German, Spanish, Russian, and Turkish. Together with English newspapers from the popular CNN/Daily Mail dataset, the collected data form a large-scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multilingual dataset.
404
1
GEM/opusparcus
false
[ "task_categories:other", "annotations_creators:expert-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:de", "language:en", "language:fi", "language:fr", "language:ru", "language:sv", "license:cc-by-nc-4.0", "paraphrasing" ]
Opusparcus is a paraphrase corpus for six European languages: German, English, Finnish, French, Russian, and Swedish. The paraphrases are extracted from the OpenSubtitles2016 corpus, which contains subtitles from movies and TV shows.
7,289
0
GEM/references
false
[]
null
265
0
GEM/schema_guided_dialog
false
[ "task_categories:conversational", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "dialog-response-generation", "arxiv:1909.05855", "arxiv:2004.15006", "arxiv:2002.01359" ]
The Schema-Guided Dialogue (SGD) dataset contains 18K multi-domain task-oriented dialogues between a human and a virtual assistant, covering 17 domains ranging from banks and events to media, calendar, travel, and weather. The only language present in the dataset is English. The SGD dataset provides a challenging testbed for a number of tasks in task-oriented dialogue, including language understanding, slot filling, dialogue state tracking, and response generation. For the creation of the SGD dataset, the authors developed a multi-domain dialogue simulator that generates dialogue outlines over an arbitrary combination of APIs, dialogue states, and system actions. They then used a crowd-sourcing procedure to paraphrase these outlines into natural language utterances. This novel crowd-sourcing procedure preserves all annotations obtained from the simulator and does not require any extra annotations after dialogue collection.
429
1
GEM/sportsett_basketball
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:mit", "data-to-text" ]
SportSett:Basketball is a dataset for data-to-text generation containing NBA game statistics aligned with their human-written summaries.
272
3
GEM/squad_v2
false
[ "task_categories:other", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "question-generation", "arxiv:1806.03822" ]
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
287
0
GEM/surface_realisation_st_2020
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:ar", "language:zh", "language:en", "language:fr", "language:hi", "language:id", "language:ja", "language:ko", "language:pt", "language:ru", "language:es", "license:cc-by-2.5", "data-to-text" ]
null
266
0
GEM/totto
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "data-to-text", "arxiv:1603.07771", "arxiv:2007.02871", "arxiv:2005.10433" ]
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
318
1
GEM/turku_hockey_data2text
false
[ "task_categories:table-to-text", "annotations_creators:expert-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:fi", "license:cc-by-nc-sa-4.0", "data-to-text" ]
The Turku Hockey Data2Text corpus was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting. This dataset is a collection of 3,454 ice hockey games, each including game statistics and a news article describing the game. Each game includes manual alignment of events (such as goals or penalties) and sentences describing the specific event in natural language extracted from the news article. The corpus includes 12,827 annotated events. The natural language passages are manually curated not to include any information not derivable from the input data or world knowledge.
397
0
GEM/turku_paraphrase_corpus
false
[ "task_categories:other", "annotations_creators:expert-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:fi", "license:cc-by-sa-4.0", "paraphrasing" ]
Turku Paraphrase Corpus is a dataset of 104,645 manually annotated Finnish paraphrases. The vast majority of the data is classified as a paraphrase either in the given context, or universally.
529
0
GEM-submissions/v1-outputs-and-scores
false
[]
null
267
0
GEM/viggo
false
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "data-to-text" ]
ViGGO was designed for the task of data-to-text generation in chatbots (as opposed to task-oriented dialogue systems), with target responses being more conversational than information-seeking, yet constrained to the information presented in a meaning representation. The dataset, being relatively small and clean, can also serve to demonstrate the transfer learning capabilities of neural models.
268
0
GEM/web_nlg
false
[ "task_categories:table-to-text", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "data-to-text" ]
WebNLG is a bilingual dataset (English, Russian) of parallel DBpedia triple sets and short texts that cover about 450 different DBpedia properties. The WebNLG data was originally created to promote the development of RDF verbalisers able to generate short text and to handle micro-planning (i.e., sentence segmentation and ordering, referring expression generation, aggregation); the goal of the task is to generate texts starting from 1 to 7 input triples which have entities in common (so the input is actually a connected Knowledge Graph; an illustration follows this entry). The dataset contains about 17,000 triple sets and 45,000 crowdsourced texts in English, and 7,000 triple sets and 19,000 crowdsourced texts in Russian. A challenging test set section with entities and/or properties that have not been seen at training time is available.
697
1
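A hypothetical illustration of the `GEM/web_nlg` task structure: a connected set of 1 to 7 DBpedia triples in, a short verbalisation out. The triples and reference text below are invented for illustration.

```python
# Invented example: two triples sharing a subject entity, plus a
# reference verbalisation covering both facts.
triples = [
    ("John_E_Blaha", "birthDate", "1942-08-26"),
    ("John_E_Blaha", "occupation", "Fighter_pilot"),
]
reference = "John E. Blaha, born on 26 August 1942, served as a fighter pilot."
```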
GEM/wiki_auto_asset_turk
false
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "arxiv:1910.02677", "arxiv:2005.00352" ]
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the manual config in this version of the dataset), then trained a neural CRF system to predict these alignments. The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the auto and auto_acl configs here).
287
2
GEM/wiki_cat_sum
false
[ "task_categories:summarization", "annotations_creators:automatically-created", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:1906.04687", "arxiv:1801.10198", "arxiv:2009.07032" ]
Summarise the most important facts of a given entity in the Film, Company, and Animal domains from a cluster of related documents.
538
2
GEM/wiki_lingua
false
[ "task_categories:summarization", "annotations_creators:none", "language_creators:unknown", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:ar", "language:cs", "language:de", "language:en", "language:es", "language:fr", "language:hi", "language:id", "language:it", "language:ja", "language:ko", "language:nl", "language:pt", "language:ru", "language:th", "language:tr", "language:vi", "language:zh", "license:cc-by-nc-sa-3.0" ]
WikiLingua is a large-scale multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were done by aligning the images that are used to describe each how-to step in an article.
50,083
17
GEM/xlsum
false
[ "task_categories:summarization", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:und", "license:cc-by-nc-sa-4.0", "arxiv:1607.01759" ]
We present XL-Sum, a comprehensive and diverse dataset comprising 1.35 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 45 languages ranging from low- to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation.
6,950
2
GEM/xsum
false
[ "task_categories:summarization", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0" ]
This is the XSUM subset of the GEM benchmark.
318
0
GEM-submissions/GEM__bart_base_schema_guided_dialog__1645547915
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/Leo__bart-large__1645784880
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/Leo__mbart-large-cc25__1645802644
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1645558682
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1645559101
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1645800191
false
[ "benchmark:gem" ]
null
266
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646049378
false
[ "benchmark:gem" ]
null
265
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646049424
false
[ "benchmark:gem" ]
null
271
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646049601
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646049876
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646050898
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646051364
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1646052073
false
[ "benchmark:gem", "evaluation", "benchmark" ]
null
267
0
GEM-submissions/lewtun__this-is-a-test__1646052811
false
[ "benchmark:gem", "evaluation", "benchmark" ]
null
267
0
GEM-submissions/lewtun__this-is-a-test__1646230987
false
[ "benchmark:gem", "evaluation", "benchmark" ]
null
267
0
GEM-submissions/ratishsp
false
[ "benchmark:gem" ]
null
267
0
GEM-submissions/submission-scores
false
[]
null
267
0
GV05/shlomit_speech
false
[]
null
265
0
Gabriel/quora_swe
false
[ "task_categories:text-retrieval", "task_categories:text-classification", "task_ids:semantic-similarity-classification", "size_categories:10K<n<100K", "language:sv", "license:mit", "question-pairing", "semantic-search" ]
null
266
0
GalacticAI/Noirset
false
[]
null
135
0
Gauravadlakha1509/new_one
false
[]
null
267
0
GeoffVdr/cv8_trainval_processed
false
[]
null
135
0
GonzaloA/fake_news
false
[]
null
347
4
Graphcore/gqa-lxmert
false
[ "language:en", "license:cc-by-4.0" ]
GQA is a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous visual question answering (VQA) datasets.
268
0
Graphcore/gqa
false
[ "language:en", "license:cc-by-4.0" ]
GQA is a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous visual question answering (VQA) datasets.
277
0
Graphcore/vqa-lxmert
false
[ "language:en", "license:cc-by-4.0" ]
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
267
0
Graphcore/vqa
false
[ "language:en", "license:cc-by-4.0" ]
VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.
290
0
Graphcore/wikipedia-bert-128
false
[ "language:en", "license:cc-by-sa-3.0" ]
null
272
0
Graphcore/wikipedia-bert-512
false
[ "language:en", "license:cc-by-sa-3.0" ]
null
265
0
GroNLP/ik-nlp-22_pestyle
false
[ "task_categories:translation", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:it", "license:other" ]
This dataset contains a sample of sentences taken from the FLORES-101 dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and translators' behavioral data (keystrokes, pauses, editing times) were collected using the PET platform.
265
0
GroNLP/ik-nlp-22_slp
false
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "question-generation" ]
Paragraphs from the Speech and Language Processing book (3rd ed.) by Jurafsky and Martin, extracted semi-automatically from Chapters 2 to 11 of the original book draft.
550
0
GroNLP/ik-nlp-22_transqe
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|esnli", "language:en", "language:nl", "license:apache-2.0", "quality-estimation" ]
The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to include human-annotated natural language explanations of the entailment relations. This version includes an automatic translation to Dutch and two quality estimation annotations for each translated field.
265
0
GroNLP/ik-nlp-22_winemag
false
[ "license:cc-by-sa-4.0" ]
null
269
1
Gwangho/NCBI-Sars-Cov-2
false
[]
null
136
0
HHousen/ParaSCI
false
[ "arxiv:2101.08382" ]
null
269
0
HHousen/msrp
false
[]
null
332
1
HHousen/quora
false
[]
null
270
1
HUPD/hupd
false
[ "task_categories:fill-mask", "task_categories:summarization", "task_categories:text-classification", "task_categories:token-classification", "task_ids:masked-language-modeling", "task_ids:multi-class-classification", "task_ids:topic-classification", "task_ids:named-entity-recognition", "language:en", "license:cc-by-sa-4.0", "patents", "arxiv:2207.04043" ]
The Harvard USPTO Patent Dataset (HUPD) is a large-scale, well-structured, and multi-purpose corpus of English-language patent applications filed with the United States Patent and Trademark Office (USPTO) between 2004 and 2018. With more than 4.5 million patent documents, HUPD is two to three times larger than comparable corpora. Unlike other NLP patent datasets, HUPD contains the inventor-submitted versions of patent applications, not the final versions of granted patents, allowing us to study patentability at the time of filing using NLP methods for the first time.
470
10
Halilyesilceng/autonlp-data-nameEntityRecognition
false
[]
null
135
0
HarleyQ/WitcherDialogue
false
[]
null
135
0
HarrisDePerceptron/sv_corpora_parliament_processed
false
[]
null
267
0
HarrisDePerceptron/ur_corpora_pib
false
[]
null
267
0
Harveenchadha/bol-models
false
[]
null
135
0
HarveyBWest/mybot
false
[]
null
135
0
Hellisotherpeople/DebateSum
false
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:abstractive-qa", "task_ids:document-retrieval", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:mit", "conditional-text-generation", "arxiv:2011.07251" ]
null
289
3
Helsinki-NLP/tatoeba_mt
false
[ "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:translation", "size_categories:unknown", "source_datasets:original", "language:af", "language:ar", "language:az", "language:be", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:ch", "language:cs", "language:cv", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fo", "language:fr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:ia", "language:id", "language:ie", "language:io", "language:is", "language:it", "language:ja", "language:jv", "language:ka", "language:kk", "language:km", "language:ko", "language:ku", "language:kw", "language:la", "language:lb", "language:lt", "language:lv", "language:mi", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:nb", "language:nl", "language:nn", "language:no", "language:oc", "language:pl", "language:pt", "language:qu", "language:rn", "language:ro", "language:ru", "language:sh", "language:sl", "language:sq", "language:sr", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tk", "language:tl", "language:tr", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:vo", "language:yi", "language:zh", "license:cc-by-2.0" ]
The Tatoeba Translation Challenge is a multilingual data set of machine translation benchmarks derived from user-contributed translations collected by [Tatoeba.org](https://tatoeba.org/) and provided as a parallel corpus from [OPUS](https://opus.nlpl.eu/). This dataset includes test and development data sorted by language pair. It includes test sets for hundreds of language pairs and is continuously updated. Please check the version number tag to refer to the release that you are using (a loading sketch follows this entry).
117,135
24
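A minimal sketch of pulling one language pair from `Helsinki-NLP/tatoeba_mt`; the exact per-pair config naming (e.g. `eng-deu`) is an assumption, so the sketch lists the available configs first.

```python
# Sketch only: discover the per-pair configs, then load one pair's
# test split. The "eng-deu" config name is an assumption; inspect the
# printed list for the actual naming.
from datasets import get_dataset_config_names, load_dataset

pairs = get_dataset_config_names("Helsinki-NLP/tatoeba_mt")
print(len(pairs), pairs[:5])

eng_deu = load_dataset("Helsinki-NLP/tatoeba_mt", "eng-deu", split="test")
print(eng_deu[0])
```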
HenryAI/KerasAPIReference.txt
false
[]
null
267
0
HenryAI/KerasBERTv1-Data
false
[]
null
267
0
HenryAI/KerasCodeExamples.txt
false
[]
null
267
0
HenryAI/KerasDeveloperGuides.txt
false
[]
null
134
0
Huertas97/autonlp-data-mami-semeval-20-21
false
[]
null
266
0
Husain/intent-classification-en-fr
false
[]
null
135
0
IFSTalfredoswald/MBTI
false
[]
null
267
0
Iftoo95/Arabic_Sentiment_and_Topics
false
[]
null
135
0
IlyaGusev/gazeta
false
[ "task_categories:summarization", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "license:unknown", "arxiv:2006.11063" ]
null
864
11
IlyaGusev/headline_cause
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ru", "language:en", "license:cc0-1.0", "causal-reasoning", "arxiv:2108.12626" ]
null
931
2
Intel/WEC-Eng
false
[]
null
267
0
Ishwar/Senti
false
[]
null
267
0
Iskaj/dutch_corpora_parliament_processed
false
[]
null
267
0
JIWON/nil_dataset
false
[]
null
135
0