Columns:
- id: string (length 2 to 115)
- private: bool (1 class)
- tags: sequence
- description: string (length 0 to 5.93k)
- downloads: int64 (0 to 1.14M)
- likes: int64 (0 to 1.79k)
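The rows below follow this schema. As a minimal sketch of how one might represent and filter these records in Python (the `DatasetRecord` class and `popular` helper are illustrative names, not part of any published API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetRecord:
    """One row of the listing below (illustrative, not a published API)."""
    id: str                                        # repo id, 2 to 115 characters
    private: bool                                  # every row in this listing is false (1 class)
    tags: List[str] = field(default_factory=list)  # e.g. ["license:mit", "arxiv:2005.00614"]
    description: Optional[str] = None              # None where the listing shows null
    downloads: int = 0                             # 0 to ~1.14M in this snapshot
    likes: int = 0                                 # 0 to ~1.79k in this snapshot

def popular(records: List[DatasetRecord], min_downloads: int = 500) -> List[DatasetRecord]:
    """Return records with at least min_downloads, most downloaded first."""
    hits = [r for r in records if r.downloads >= min_downloads]
    return sorted(hits, key=lambda r: r.downloads, reverse=True)
```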
| id | private | tags | description | downloads | likes |
|---|---|---|---|---|---|
| csikasote/bembaspeech_plus_jw_processed | false | [] | null | 264 | 0 |
| cstrathe435/Task2Dial | false | [] | null | 270 | 0 |
| ctgowrie/chessgames | false | [] | null | 135 | 0 |
| ctu-aic/csfever | false | ["license:cc-by-sa-3.0", "arxiv:1803.05355", "arxiv:2201.11115"] | CsFEVER is a Czech localisation of the English FEVER dataset. | 262 | 1 |
| ctu-aic/csfever_nli | false | [] | CsfeverNLI is an NLI version of the Czech CsFEVER dataset. | 265 | 1 |
| ctu-aic/ctkfacts_nli | false | ["arxiv:2201.11115"] | CtkFactsNLI is an NLI version of the Czech CTKFacts dataset. | 269 | 1 |
| cyko/books | false | [] | null | 134 | 0 |
| cylee/github-issues | false | ["arxiv:2005.00614"] | null | 264 | 0 |
| dalle-mini/YFCC100M_OpenAI_subset | false | ["arxiv:1503.01817"] | The YFCC100M is one of the largest publicly and freely usable multimedia collections, containing the metadata of around 99.2 million photos and 0.8 million videos from Flickr, all of which were shared under one of the various Creative Commons licenses. This version is a subset defined in openai/CLIP. | 277 | 5 |
| dalle-mini/open-images | false | [] | null | 278 | 2 |
| dalle-mini/wit | false | [] | null | 263 | 4 |
| damlab/HIV_FLT | false | [] | null | 270 | 0 |
| damlab/HIV_PI | false | ["license:mit"] | null | 265 | 0 |
| damlab/HIV_V3_bodysite | false | [] | null | 264 | 0 |
| damlab/HIV_V3_coreceptor | false | [] | null | 264 | 0 |
| dansbecker/hackernews_hiring_posts | false | [] | null | 266 | 0 |
| darentang/generated | false | [] | https://arxiv.org/abs/2103.10213 | 336 | 0 |
| darentang/sroie | false | [] | https://arxiv.org/abs/2103.10213 | 230 | 0 |
| darkraipro/recipe-instructions | false | [] | null | 266 | 0 |
| dasago78/dasago78dataset | false | [] | null | 133 | 0 |
| dataset/wikipedia_bn | false | [] | Bengali Wikipedia from the dump of 03/20/2021. The data was processed using the Hugging Face datasets wikipedia script in early April 2021. The dataset was built from the Wikipedia dump (https://dumps.wikimedia.org/). Each example contains the content of one full Wikipedia article, cleaned to strip markdown and unwanted sections (references, etc.). | 268 | 0 |
| davanstrien/19th-century-ads | false | [] | null | 264 | 0 |
| davanstrien/ads-test | false | [] | null | 133 | 0 |
| davanstrien/beyond_test | false | [] | null | 264 | 0 |
| davanstrien/crowdsourced-keywords | false | [] | null | 264 | 0 |
| davanstrien/embellishments-sample | false | [] | null | 265 | 0 |
| davanstrien/embellishments | false | [] | null | 263 | 0 |
| davanstrien/hipe2020 | false | [] | null | 133 | 0 |
| davanstrien/iiif_labeled | false | [] | null | 133 | 0 |
| davanstrien/iiif_manuscripts_label_ge_50 | false | [] | null | 264 | 0 |
| davanstrien/kitten | false | [] | null | 264 | 0 |
| davanstrien/manuscript_iiif_test | false | [] | null | 264 | 0 |
| BritishLibraryLabs/BookGenreSnorkelAnnotated | false | [] | null | 298 | 0 |
| davanstrien/test_iiif | false | [] | null | 264 | 0 |
| davanstrien/test_push_to_hub_image | false | [] | null | 264 | 0 |
| davanstrien/testpush | false | [] | null | 264 | 0 |
| david-wb/zeshel | false | [] | null | 133 | 0 |
| davidwisdom/reddit-randomness | false | [] | null | 264 | 0 |
| dcfidalgo/test | false | [] | null | 264 | 0 |
| debajyotidatta/biosses | false | ["license:gpl-3.0"] | null | 131 | 0 |
| debatelab/aaac | false | ["task_categories:summarization", "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:parsing", "task_ids:text-simplification", "annotations_creators:machine-generated", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "argument-mining", "conditional-text-generation", "structure-prediction", "arxiv:2110.01509"] | null | 264 | 1 |
| debatelab/deepa2 | false | ["task_categories:text-retrieval", "task_categories:text-generation", "task_ids:text-simplification", "task_ids:parsing", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "language:en", "license:other", "argument-mining", "summarization", "conditional-text-generation", "structure-prediction", "arxiv:2110.01509"] | null | 263 | 3 |
| deepset/germandpr | false | ["task_categories:question-answering", "task_categories:text-retrieval", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "multilinguality:monolingual", "source_datasets:original", "language:de", "license:cc-by-4.0", "arxiv:2104.12741"] | We take GermanQuAD as a starting point and add hard negatives from a dump of the full German Wikipedia, following the approach of the DPR authors (Karpukhin et al., 2020). The format of the dataset also resembles that of DPR. GermanDPR comprises 9,275 question/answer pairs in the training set and 1,025 pairs in the test set. Each pair comes with one positive context and three hard negative contexts. | 353 | 5 |
| deepset/germanquad | false | ["task_categories:question-answering", "task_categories:text-retrieval", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "task_ids:open-domain-qa", "multilinguality:monolingual", "source_datasets:original", "language:de", "license:cc-by-4.0", "arxiv:2104.12741"] | In order to raise the bar for non-English QA, we are releasing a high-quality, human-labeled German QA dataset consisting of 13,722 questions, including a three-way annotated test set. The creation of GermanQuAD is inspired by insights from existing datasets as well as our labeling experience from several industry projects. We combine the strengths of SQuAD, such as high out-of-domain performance, with self-sufficient questions that contain all relevant information for open-domain QA as in the NaturalQuestions dataset. Unlike other popular datasets, our training and test sets do not overlap, and they include complex questions that cannot be answered with a single entity or only a few words. | 594 | 13 |
| dennlinger/klexikon | false | ["task_categories:summarization", "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:found", "annotations_creators:expert-generated", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "license:cc-by-sa-4.0", "conditional-text-generation", "simplification", "document-level", "arxiv:2201.07198"] | null | 277 | 5 |
| dev/untitled_imgs | false | [] | null | 132 | 0 |
| dfgvhxfgv/fghghj | false | [] | null | 132 | 0 |
| DFKI-SLT/few-nerd | false | ["task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended\|wikipedia", "language:en", "license:cc-by-sa-4.0", "structure-prediction"] | Few-NERD is a large-scale, fine-grained, manually annotated named entity recognition dataset containing 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities and 4,601,223 tokens. Three benchmark tasks are built on it: one supervised, Few-NERD (SUP), and two few-shot, Few-NERD (INTRA) and Few-NERD (INTER). | 1,543 | 5 |
| DFKI-SLT/mobie | false | ["task_categories:other", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-4.0", "structure-prediction"] | MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. The dataset combines annotations for NER, EL and RE, and thus can be used for joint and multi-task learning of these fundamental information extraction tasks. | 313 | 0 |
| dgknrsln/Yorumsepeti | false | [] | null | 132 | 0 |
| diiogo/annotations | false | [] | null | 131 | 0 |
| dispenst/jhghdghfd | false | [] | null | 131 | 0 |
| dispix/test-dataset | false | [] | null | 131 | 0 |
| diwank/hinglish-dump | false | ["license:mit"] | Raw merged dump of Hinglish (hi-EN) datasets. | 916 | 1 |
| diwank/silicone-merged | false | ["license:mit"] | Merged and simplified dialog act datasets from the silicone collection. | 401 | 1 |
| dk-crazydiv/huggingface-modelhub | false | [] | Metadata for all the models available on Hugging Face's model hub. | 264 | 2 |
| dlb/plue | false | ["task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:extended\|glue", "language:pt", "license:lgpl-3.0", "paraphrase-identification", "qa-nli", "coreference-nli"] | PLUE (Portuguese Language Understanding Evaluation) is a Portuguese translation of the GLUE benchmark and SciTail, produced with the OPUS-MT model and Google Cloud Translation. | 2,151 | 4 |
| dongpil/test | false | [] | null | 133 | 0 |
| dragosnicolae555/RoITD | false | ["task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:ro-RO", "license:cc-by-4.0"] | null | 262 | 0 |
| dram-conflict/horror-scripts | false | [] | This dataset is designed for script generation. | 264 | 0 |
| dvilasuero/ag_news_error_analysis | false | [] | null | 264 | 0 |
| dvilasuero/ag_news_training_set_losses | false | [] | null | 264 | 0 |
| dvilasuero/test-dataset | false | [] | null | 262 | 0 |
| dweb/squad_with_cola_scores | false | [] | null | 262 | 0 |
| dynabench/dynasent | false | ["arxiv:2012.15349", "arxiv:1803.09010", "arxiv:1810.03993"] | Dynabench.DynaSent is a sentiment analysis dataset collected using a human-and-model-in-the-loop approach. | 4,770 | 2 |
| dynabench/qa | false | ["task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:2002.00293", "arxiv:1606.05250"] | Dynabench.QA is a reading comprehension dataset collected using a human-and-model-in-the-loop approach. | 655 | 1 |
| eason929/test | false | [] | null | 132 | 0 |
| ebrigham/asr_files | false | [] | null | 132 | 0 |
| ebrigham/labels | false | [] | AG is a collection of more than 1 million news articles. The articles were gathered from more than 2,000 news sources by ComeToMyHead over more than one year of activity. ComeToMyHead is an academic news search engine that has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. For more information, please refer to http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html. The AG's news topic classification dataset was constructed by Xiang Zhang ([email protected]) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). | 262 | 0 |
| ebrigham/multi_sentiment | false | [] | null | 263 | 0 |
| echarlaix/gqa-lxmert | false | ["license:apache-2.0"] | GQA is a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous visual question answering (VQA) datasets. | 264 | 0 |
| echarlaix/gqa | false | ["license:apache-2.0"] | GQA is a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous visual question answering (VQA) datasets. | 256 | 0 |
| echarlaix/vqa-lxmert | false | ["license:apache-2.0"] | VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer. | 255 | 0 |
| echarlaix/vqa | false | ["license:apache-2.0"] | VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer. | 272 | 0 |
| edbeeching/decision_transformer_gym_replay | false | ["license:apache-2.0", "arxiv:2004.07219"] | A subset of the D4RL dataset, used for training Decision Transformers. | 1,649 | 1 |
| edbeeching/github-issues | false | [] | null | 256 | 0 |
| edfews/szdfcszdf | false | [] | null | 128 | 0 |
| edge2992/github-issues | false | [] | null | 254 | 0 |
| edge2992/rri-short | false | [] | null | 254 | 0 |
| edge2992/rri_short | false | [] | null | 129 | 0 |
| edsas/fgrdtgrdtdr | false | [] | null | 129 | 0 |
| edsas/grttyi | false | [] | null | 129 | 0 |
| ehcalabres/ravdess_speech | false | ["task_categories:audio-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0"] | null | 128 | 2 |
| ejjaffe/onion_headlines_2_sources | false | [] | null | 257 | 0 |
| eliza-dukim/load_klue_re | false | [] | null | 128 | 0 |
| elonmuskceo/persistent-space-dataset | false | [] | null | 253 | 0 |
| elonmuskceo/wordle | false | [] | null | 256 | 1 |
| elricwan/bert_data | false | [] | null | 257 | 0 |
| emre/Open_SLR108_Turkish_10_hours | false | ["license:cc-by-4.0", "robust-speech-event", "arxiv:2103.16193"] | null | 255 | 2 |
| emrecan/stsb-mt-turkish | false | ["task_categories:text-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "language_creators:machine-generated", "size_categories:1K<n<10K", "source_datasets:extended\|other-sts-b", "language:tr"] | null | 1,527 | 3 |
| enelpol/czywiesz | false | ["task_categories:question-answering", "task_ids:open-domain-qa", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:unknown"] | null | 258 | 0 |
| ervis/aaa | false | [] | null | 128 | 0 |
| ervis/qqq | false | [] | null | 128 | 0 |
| erwanlc/cocktails_recipe | false | ["annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:2M<n<3M", "language:en", "license:other"] | null | 257 | 1 |
| erwanlc/cocktails_recipe_no_brand | false | ["annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:2M<n<3M", "language:en", "license:other"] | null | 256 | 1 |
| espejelomar/code_search_net_python_10000_examples | false | ["license:cc"] | null | 282 | 4 |
| eugenesiow/BSD100 | false | ["task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:other", "image-super-resolution"] | BSD is a dataset used frequently for image denoising and super-resolution. BSD100 is the testing set of the Berkeley segmentation dataset BSD300. | 367 | 0 |
| eugenesiow/Div2k | false | ["task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:other", "other-image-super-resolution"] | DIV2K dataset: DIVerse 2K resolution high-quality images, as used for the challenges @ NTIRE (CVPR 2017 and CVPR 2018) and @ PIRM (ECCV 2018). | 2,197 | 2 |
| eugenesiow/PIRM | false | ["task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:cc-by-nc-sa-4.0", "other-image-super-resolution", "arxiv:1809.07517"] | The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing. These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc. Images vary in size, and are typically ~300K pixels in resolution. This dataset was first used to evaluate the perceptual quality of super-resolution algorithms in the 2018 PIRM Challenge on Perceptual Super-Resolution, in conjunction with ECCV 2018. | 253 | 0 |
| eugenesiow/Set14 | false | ["task_categories:other", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "license:other", "other-image-super-resolution"] | Set14 is an evaluation dataset with 14 RGB images for the image super-resolution task. | 401 | 0 |
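Every `id` in the table is a Hub repo id that can be passed to `datasets.load_dataset`. A usage sketch, assuming the `datasets` library is installed and using `deepset/germanquad` from the listing above as the example:

```python
from datasets import load_dataset

# Fetch one of the listed datasets from the Hugging Face Hub by its id.
germanquad = load_dataset("deepset/germanquad")

# Inspect the available splits and a single training example.
print(germanquad)
print(germanquad["train"][0])
```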