| column | dtype | value range |
| --- | --- | --- |
| id | string | lengths 2 to 115 |
| private | bool | 1 class |
| tags | sequence | |
| description | string | lengths 0 to 5.93k |
| downloads | int64 | 0 to 1.14M |
| likes | int64 | 0 to 1.79k |
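For working with the listing programmatically, here is a minimal sketch of the row schema implied by the table above, written as a Python TypedDict. The class name and the use of `None` for missing descriptions are assumptions, not part of the source.

```python
from typing import List, Optional, TypedDict


class DatasetCardRow(TypedDict):
    """One row of the dataset listing (illustrative; the name is hypothetical)."""
    id: str                      # repository id, 2 to 115 characters
    private: bool                # single observed class (false throughout)
    tags: List[str]              # sequence of "key:value" tag strings
    description: Optional[str]   # 0 to 5.93k characters; None where "null"
    downloads: int               # int64, 0 to 1.14M
    likes: int                   # int64, 0 to 1.79k
```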
eugenesiow/Set5
false
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:other", "other-image-super-resolution" ]
Set5 is an evaluation dataset with 5 RGB images for the image super-resolution task.
443
0
eugenesiow/Urban100
false
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:cc-by-4.0", "other-image-super-resolution" ]
The Urban100 dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models.
364
0
evageon/IADD
false
[ "license:cc-by-4.0" ]
null
256
0
facebook/multilingual_librispeech
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:de", "language:nl", "language:fr", "language:it", "language:es", "language:pt", "language:pl", "license:cc-by-4.0", "arxiv:2012.03411" ]
This is a streamable version of the Multilingual LibriSpeech (MLS) dataset. The data archives were restructured from the original ones on [OpenSLR](http://www.openslr.org/94) to make them easier to stream (see the streaming sketch below). The MLS dataset is a large multilingual corpus suitable for speech research, derived from read audiobooks from LibriVox, and consists of 8 languages: English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
1,161
16
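Since the record above advertises archives restructured for streaming, a minimal sketch of streaming it with the `datasets` library follows. The `"german"` config name is an assumption taken from the language tags; check the dataset card for the exact list of configs.

```python
from itertools import islice

from datasets import load_dataset

# Stream examples without downloading the full audio archives.
# The "german" config name is an assumption; see the dataset card.
mls = load_dataset(
    "facebook/multilingual_librispeech",
    "german",
    split="train",
    streaming=True,
)

for example in islice(mls, 2):  # peek at the first two streamed examples
    print(example.keys())
```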
fastjt/fasst
false
[ "license:afl-3.0" ]
null
127
0
fatvvs/autonlp-data-entity_model_conll2003
false
[]
null
256
0
fededeleon/CriteriosClasificacion
false
[ "license:mit" ]
null
257
0
fengzhang/fzTestDatasets
false
[]
null
129
0
fhamborg/news_sentiment_newsmtsc
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit" ]
NewsMTSC: A large, manually annotated dataset for target-dependent sentiment classification in English news articles.
553
5
fighterhitx/test
false
[ "license:cc" ]
null
127
0
fihtrotuld/asu
false
[]
null
127
0
flax-community/code_clippy_data
false
[]
null
128
0
flax-community/conceptual-12m-mbart-50-multilingual
false
[]
null
253
0
flax-community/conceptual-12m-multilingual-marian-128
false
[]
null
255
0
flax-community/conceptual-12m-multilingual-marian-es
false
[]
null
287
0
flax-community/conceptual-12m-multilingual-marian
false
[]
null
253
0
flax-community/conceptual-captions-12
false
[]
null
255
1
flax-community/dummy-oscar-als-32
false
[]
null
255
0
flax-community/german-common-voice-processed
false
[]
null
256
1
flax-community/german_common_crawl
false
[]
German-only extract from Common Crawl. This dataset is intended for pretraining a German language model (unsupervised) or for tuning a multilingual model specifically to German.
635
0
flax-community/multilingual-vqa
false
[]
null
256
0
flax-community/norwegian-clean-dummy
false
[]
null
129
0
flax-community/swahili-safi
false
[]
Cleaned dataset for Swahili Language Modeling
267
3
flax-sentence-embeddings/Gender_Bias_Evaluation_Set
false
[ "arxiv:1906.00591" ]
null
256
2
flax-sentence-embeddings/paws-jsonl
false
[]
null
253
0
flax-sentence-embeddings/stackexchange_math_jsonl
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
549
0
flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
22,652
1
flax-sentence-embeddings/stackexchange_title_body_jsonl
false
[]
null
259
0
flax-sentence-embeddings/stackexchange_titlebody_best_and_down_voted_answer_jsonl
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
23,983
8
flax-sentence-embeddings/stackexchange_titlebody_best_voted_answer_jsonl
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
22,221
3
flax-sentence-embeddings/stackexchange_xml
false
[]
null
129
1
flexthink/librig2p-nostress-space
false
[]
Grapheme-to-Phoneme training, validation and test sets
261
0
flexthink/librig2p-nostress
false
[]
Grapheme-to-Phoneme training, validation and test sets
261
0
flexthink/ljspeech
false
[]
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
253
1
florentgbelidji/test-3
false
[]
null
129
0
florentgbelidji/test-dataset
false
[]
null
129
0
florianbussmann/FUNSD-vu2020revising
false
[ "multilinguality:monolingual", "language:en", "arxiv:2010.05322" ]
FUNSD is one of the limited publicly available datasets for information extraction from document images. The information in the FUNSD dataset is defined by text areas of four categories ("key", "value", "header", "other") plus the background, and by connectivity between areas as key-value relations. Inspecting FUNSD, we found several inconsistencies in labeling, which impeded its applicability to the key-value extraction problem. In this report, we describe some labeling issues in FUNSD and the revisions we made to the dataset.
129
0
florianbussmann/train_tickets-yu2020pick
false
[]
The train ticket dataset has a fixed layout, but it contains background noise and imaging distortions. It contains 1,530 synthetic images and 320 real images for training, and 80 real images for testing. Every train ticket has eight key text fields: ticket number, starting station, train number, destination station, date, ticket rates, seat category, and name. This dataset mainly consists of digits, English characters, and Chinese characters.
128
0
flxclxc/encoded_drug_reviews
false
[]
null
256
2
formermagic/github_python_1m
false
[ "task_ids:language-modeling", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:py", "license:mit" ]
null
130
1
formu/CVT
false
[]
null
128
0
fractalego/QA_to_statements
false
[ "arxiv:1809.02922", "doi:10.57967/hf/0011" ]
null
256
0
frahman/github-issues
false
[]
null
256
0
frtna/deneme
false
[]
null
256
0
frtna/es_it_Results-base-OPUS_Tatoeba
false
[]
null
256
0
frtna/jwt300_mt
false
[]
This new dataset is designed to be used in the scope of a machine translation project.
256
0
frtna/opensubtitles_mt
false
[]
This new dataset is designed to be used in the scope of a PhD project.
255
0
frtna/sabahaKKarsi
false
[]
null
256
0
frtna/ted_mt
false
[]
This new dataset is designed to be used in the scope of a multilingual model project.
256
0
frtna/test
false
[]
null
127
0
frtna/test2
false
[]
null
129
0
fulai/DuReader
false
[]
null
130
0
fuliucansheng/coco
false
[]
null
129
0
fuliucansheng/minicoco
false
[]
MINICOCO2017
353
0
fuliucansheng/mininlp
false
[]
MiniNLP Data
257
0
fuliucansheng/pascal_voc
false
[]
PASCAL_VOC
164
0
fuyun1107/clip-for-vlp
false
[]
null
255
0
fvillena/cantemist
false
[]
null
256
0
fvillena/spanish_diagnostics
false
[]
null
254
0
gabella/demo_data_raw
false
[]
null
252
0
gabtan99/pex-conversations
false
[ "task_ids:dialogue-modeling", "task_ids:language-modeling", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:tl", "language:fil", "license:unknown", "multi-turn" ]
null
255
1
gagan3012/fake-news
false
[]
null
129
0
gagan3012/grover-data
false
[]
null
256
0
gagan3012/vizwiz
false
[ "license:apache-2.0" ]
null
131
0
gar1t/test
false
[]
null
256
0
gayanin/pubmed-gastro-maskfilling
false
[]
null
256
0
gayanin/pubmed-gastro-paraphrasing
false
[]
null
256
2
gayanin/pubmed-gastro-summarisation
false
[]
null
252
0
gcaillaut/citeseer
false
[]
The CiteSeer dataset consists of 3312 scientific publications classified into one of six classes. The citation network consists of 4732 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 3703 unique words. The README file in the dataset provides more details.
256
0
gcaillaut/cora
false
[]
The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.
257
0
gcaillaut/frwiki_good_pages_el
false
[ "task_categories:other", "annotations_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:fr-FR", "language:fr", "license:wtfpl" ]
French Wikipedia dataset for Entity Linking
254
1
gcaillaut/pubmed
false
[]
The Pubmed Diabetes dataset consists of 19717 scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. The citation network consists of 44338 links. Each publication in the dataset is described by a TF/IDF-weighted word vector from a dictionary of 500 unique words (see the sketch below). The README file in the dataset provides more details.
256
0
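The three citation-network records above (CiteSeer, Cora, Pubmed) describe each publication as a word vector over a fixed dictionary: 0/1-valued for CiteSeer and Cora, TF/IDF-weighted for Pubmed. A toy sketch of the binary encoding follows; the dictionary and document are made up for illustration.

```python
import numpy as np

# Toy dictionary; the real ones hold 3703 (CiteSeer), 1433 (Cora),
# or 500 (Pubmed) unique words.
dictionary = ["graph", "neural", "citation", "network", "diabetes"]


def binary_bow(document_words, dictionary):
    """0/1 vector marking which dictionary words occur in the document."""
    present = set(document_words)
    return np.array([int(w in present) for w in dictionary], dtype=np.int8)


print(binary_bow(["citation", "network", "analysis"], dictionary))
# -> [0 0 1 1 0]
```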
geekydevu/mlquestions
false
[]
null
129
0
geninhu/vi_opus100_processed
false
[]
null
256
0
geninhu/vi_vivos-cv-tts-fpt_processed
false
[]
null
257
0
german-nlp-group/german_common_crawl
false
[]
German-only extract from Common Crawl. This dataset is intended for pretraining a German language model (unsupervised) or for tuning a multilingual model specifically to German.
264
6
gfigueroa/wikitext_processed
false
[]
null
256
0
gfissore/arxiv-abstracts-2021
false
[ "task_categories:summarization", "task_categories:text-retrieval", "task_categories:text2text-generation", "task_ids:explanation-generation", "task_ids:text-simplification", "task_ids:document-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:en", "license:cc0-1.0", "arxiv:1905.00075" ]
null
505
6
ghadeermobasher/BC5CDR-Chemical-Disease
false
[]
null
535
3
ghadeermobasher/CRAFT-Chem
false
[]
null
256
0
ghomasHudson/ao3_style_change
false
[]
null
256
0
ghomasHudson/character_id
false
[]
The character type identification dataset consists of movie scripts annotated with character archetypes (Hero, Villain, Mentor, etc.).
256
0
ghomasHudson/hotpotExtended
false
[]
null
255
0
ghomasHudson/long_contra_pro
false
[]
null
256
0
ghomasHudson/muld
false
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_categories:translation", "task_ids:abstractive-qa", "annotations_creators:found", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:translation", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "source_datasets:extended|hotpot_qa", "source_datasets:extended|open_subtitles", "language:en", "language:de", "conditional-text-generation", "arxiv:2202.07362" ]
MuLD: The Multitask Long Document Benchmark. A set of NLP tasks where each example is over 10,000 tokens long.
943
3
ghomasHudson/vlsp
false
[ "language:en" ]
Very Long version of the scientific papers summarization dataset. Only includes theses over 10,000 tokens long.
256
0
gigant/african_accented_french
false
[ "task_categories:automatic-speech-recognition", "language:fr", "license:cc" ]
This corpus consists of approximately 22 hours of speech recordings. Transcripts are provided for all the recordings. The corpus can be divided into 3 parts:
1. Yaounde: collected by a team from the U.S. Military Academy's Center for Technology Enhanced Language Learning (CTELL) in 2003 in Yaoundé, Cameroon. It has recordings from 84 speakers, 48 male and 36 female.
2. CA16: collected by a RDECOM Science Team who participated in the United Nations exercise Central Accord 16 (CA16) in Libreville, Gabon in June 2016. The Science Team included DARPA's Dr. Boyan Onyshkevich and Dr. Aaron Lawson (SRI International), as well as RDECOM scientists. It has recordings from 125 speakers from Cameroon, Chad, Congo and Gabon.
3. Niger: collected from 23 speakers in Niamey, Niger, Oct. 26-30 2015. These speakers were students in a course for officers and sergeants presented by Army trainers assigned to U.S. Army Africa. The data was collected by RDECOM Science & Technology Advisors Major Eddie Strimel and Mr. Bill Bergen.
254
3
gigant/m-ailabs_speech_dataset_fr
false
[ "task_categories:automatic-speech-recognition", "language:fr", "license:cc" ]
The M-AILABS Speech Dataset is the first large dataset that we are providing free of charge, freely usable as training data for speech recognition and speech synthesis. Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and text files in a prepared format. A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds, with total lengths approximately as shown in the list (and in the respective info.txt files) below. The texts were published between 1884 and 1964 and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain, except for Ukrainian. Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).
255
0
gigant/ro_corpora_parliament_processed
false
[]
null
252
0
gigant/romanian_speech_synthesis_0_8_1
false
[ "task_categories:automatic-speech-recognition", "language:ro", "license:unknown" ]
The Romanian speech synthesis (RSS) corpus was recorded in a hemianechoic chamber (anechoic walls and ceiling; floor partially anechoic) at the University of Edinburgh. We used three high-quality studio microphones: a Neumann U89i (large-diaphragm condenser), a Sennheiser MKH 800 (small-diaphragm condenser with very wide bandwidth) and a DPA 4035 (headset-mounted condenser). Although the current release includes only speech data recorded via the Sennheiser MKH 800, we may release speech data recorded via other microphones in the future. All recordings were made at a 96 kHz sampling frequency and 24 bits per sample, then downsampled to a 48 kHz sampling frequency. For recording, downsampling and bit-rate conversion, we used ProTools HD hardware and software. We conducted 8 sessions over the course of a month, recording about 500 sentences in each session. At the start of each session, the speaker listened to a previously recorded sample in order to attain a similar voice quality and intonation.
267
1
giganticode/java-cmpx-v1
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "size_categories:unknown", "language:java", "license:mit" ]
null
258
0
giganticode/java-cmpx
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "size_categories:unknown", "language:java", "license:mit" ]
null
254
0
gj1997/trial
false
[]
null
256
0
gmnlp/tico19
false
[]
In response to the on-going crisis, several academic (Carnegie Mellon University, George Mason University, Johns Hopkins University) and industry (Amazon, Appen, Facebook, Google, Microsoft, Translated) partners have partnered with Translators without Borders to prepare COVID-19 materials for a variety of the world's languages, to be used by professional translators and for training state-of-the-art Machine Translation (MT) models. The focus is on making emergency- and crisis-related content available in as many languages as possible. The collected, curated and translated content across nearly 90 languages will be available to the professional translation community as well as the MT research community.
4,994
1
gorkemgoknar/tr_ted_talk_translated
false
[ "language:tr", "license:apache-2.0", "dataset", "turkish", "ted-multi", "cleaned" ]
null
258
1
gpt3mix/rt20
false
[]
null
256
0
gpt3mix/sst2
false
[]
null
1,631
0
gsarti/change_it
false
[ "task_categories:summarization", "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:it", "license:cc-by-nc-sa-4.0", "conditional-text-generation", "style-transfer" ]
The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and Il Giornale (right), with the two newspapers equally represented. The dataset has been used in the context of the CHANGE-IT task (https://sites.google.com/view/change-it) during the Evalita 2020 evaluation campaign (http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian, more specifically a style-transfer task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely Il Giornale (G) or la Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset comprises both the headlines and their respective full articles.
383
1
gsarti/clean_mc4_it
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended", "language:it", "license:odc-by", "arxiv:1910.10683", "arxiv:2203.03759" ]
A thoroughly cleaned version of the Italian portion of the multilingual, colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI. Based on the Common Crawl dataset (https://commoncrawl.org). This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning detailed in the repository README file.
803
4
gsarti/flores_101
false
[ "task_categories:text-generation", "task_categories:translation", "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|flores", "language:af", "language:am", "language:ar", "language:hy", "language:as", "language:ast", "language:az", "language:be", "language:bn", "language:bs", "language:bg", "language:my", "language:ca", "language:ceb", "language:zho", "language:hr", "language:cs", "language:da", "language:nl", "language:en", "language:et", "language:tl", "language:fi", "language:fr", "language:ff", "language:gl", "language:lg", "language:ka", "language:de", "language:el", "language:gu", "language:ha", "language:he", "language:hi", "language:hu", "language:is", "language:ig", "language:id", "language:ga", "language:it", "language:ja", "language:jv", "language:kea", "language:kam", "language:kn", "language:kk", "language:km", "language:ko", "language:ky", "language:lo", "language:lv", "language:ln", "language:lt", "language:luo", "language:lb", "language:mk", "language:ms", "language:ml", "language:mt", "language:mi", "language:mr", "language:mn", "language:ne", "language:ns", "language:no", "language:ny", "language:oc", "language:or", "language:om", "language:ps", "language:fa", "language:pl", "language:pt", "language:pa", "language:ro", "language:ru", "language:sr", "language:sn", "language:sd", "language:sk", "language:sl", "language:so", "language:ku", "language:es", "language:sw", "language:sv", "language:tg", "language:ta", "language:te", "language:th", "language:tr", "language:uk", "language:umb", "language:ur", "language:uz", "language:vi", "language:cy", "language:wo", "language:xh", "language:yo", "language:zu", "license:cc-by-sa-4.0", "conditional-text-generation", "arxiv:2106.03193" ]
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of topics and domains. These sentences have been translated into 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned (see the loading sketch below). By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
16,704
6
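Because FLORES-101 translations are multilingually aligned, the same row index refers to the same source sentence in every language config. A hedged sketch follows; the `"ita"`/`"fra"` config codes, the `"dev"` split name, and the `"sentence"` field name are assumptions based on FLORES conventions, so verify them against the dataset card.

```python
from datasets import load_dataset

# Load two language configs; rows are aligned across languages.
# Config codes, split name, and field name are assumptions.
ita = load_dataset("gsarti/flores_101", "ita", split="dev")
fra = load_dataset("gsarti/flores_101", "fra", split="dev")

for i in range(2):  # print the first two aligned sentence pairs
    print(ita[i]["sentence"], "||", fra[i]["sentence"])
```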