Column schema:

  id           string    (length 2 to 115)
  private      bool      (1 class: false)
  tags         sequence
  description  string    (length 0 to 5.93k)
  downloads    int64     (0 to 1.14M)
  likes        int64     (0 to 1.79k)
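
Each entry below is one row of this schema. As a reading aid, here is a minimal sketch of how such a row could be modelled and queried in Python; the `DatasetRow` type and the `by_tag_prefix` helper are illustrative assumptions, not part of the listing itself:

```
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DatasetRow:
    """One row of the listing, mirroring the column schema above."""
    id: str
    private: bool
    tags: List[str] = field(default_factory=list)
    description: Optional[str] = None
    downloads: int = 0
    likes: int = 0

def by_tag_prefix(rows: List[DatasetRow], prefix: str) -> List[DatasetRow]:
    """Return rows carrying at least one tag that starts with `prefix`."""
    return [r for r in rows if any(t.startswith(prefix) for t in r.tags)]

rows = [
    DatasetRow("Lacito/pangloss", False,
               ["task_categories:automatic-speech-recognition", "language:jya"],
               "Pangloss extracts preprocessed for ASR.", 278, 2),
    DatasetRow("Binbin/my_dataset", False, [], None, 150, 0),
]
print([r.id for r in by_tag_prefix(rows, "task_categories:")])  # ['Lacito/pangloss']
```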

Lacito/pangloss | private: false | downloads: 278 | likes: 2
tags: [ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:original", "language:jya", "language:nru", "license:cc-by-nc-sa-4.0" ]
description: These datasets are extracts from the Pangloss collection and have been preprocessed for ASR experiments in Na and Japhug.

Binbin/my_dataset | private: false | tags: [] | description: null | downloads: 150 | likes: 0
BlakesOrb6/Fred-Flintstone | private: false | tags: [] | description: null | downloads: 151 | likes: 0
Bosio/pacman | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Bosio/pacman_descriptions | private: false | tags: [] | description: null | downloads: 299 | likes: 0

BritishLibraryLabs/EThOS-PhD-metadata | private: false | downloads: 445 | likes: 1
tags: [ "task_categories:text-classification", "task_categories:fill-mask", "task_ids:multi-label-classification", "task_ids:masked-language-modeling", "multilinguality:monolingual", "language:en" ]
description: The data in this collection comprises the bibliographic metadata for all UK doctoral theses listed in EThOS, the UK's national thesis service. We estimate the data covers around 98% of all PhDs ever awarded by UK Higher Education institutions, dating back to 1787. Thesis metadata from every PhD-awarding university in the UK is included.

CAGER/rick | private: false | tags: [] | description: null | downloads: 151 | likes: 0
CALM/arwiki | private: false | tags: [ "multilinguality:monolingual", "language:ar", "license:unknown" ] | description: null | downloads: 297 | likes: 1

CAiRE/ASCEND | private: false | downloads: 386 | likes: 7
tags: [ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:zh", "license:cc-by-sa-4.0", "speech-recognition", "code-switching", "arxiv:2112.06223" ]
description: ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality corpus of spontaneous, multi-turn, conversational Chinese-English code-switching speech collected in Hong Kong. It comprises 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into training, validation, and test sets at a ratio of 8:1:1 while maintaining a balanced gender proportion in each set.

CShorten/KerasBERT | private: false | tags: [] | description: null | downloads: 151 | likes: 2
ChadxxxxHall/Inter-vision | private: false | tags: [] | description: null | downloads: 151 | likes: 0
Champion/vpc2020_clear_anon_speech | private: false | tags: [] | description: null | downloads: 149 | likes: 0
Check/a_re_gi | private: false | tags: [] | description: null | downloads: 151 | likes: 0
Check/region_1 | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Check/region_2 | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Check/region_3 | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Check/region_4 | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Check/region_5 | private: false | tags: [] | description: null | downloads: 298 | likes: 0
Check/region_6 | private: false | tags: [] | description: null | downloads: 297 | likes: 0
Check/region_7 | private: false | tags: [] | description: null | downloads: 298 | likes: 0
Check/region_8 | private: false | tags: [] | description: null | downloads: 297 | likes: 0
Check/region_9 | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Check/regions | private: false | tags: [] | description: null | downloads: 298 | likes: 0
Check/vverify | private: false | tags: [] | description: null | downloads: 297 | likes: 0
Cheranga/test | private: false | tags: [ "license:afl-3.0" ] | description: null | downloads: 148 | likes: 0
ChristophSchuhmann/MS_COCO_2017_URL_TEXT | private: false | tags: [] | description: null | downloads: 337 | likes: 5
Chun/dataset | private: false | tags: [] | description: null | downloads: 299 | likes: 0
Chuu/Vhh | private: false | tags: [] | description: null | downloads: 151 | likes: 0
CodedotAI/code-clippy-tfrecords | private: false | tags: [] | description: null | downloads: 300 | likes: 0

CodedotAI/code_clippy | private: false | downloads: 152 | likes: 8
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:code", "license:gpl-3.0", "arxiv:2107.03374" ]
description: This dataset was generated by selecting GitHub repositories from a large collection of repositories, collected from https://seart-ghs.si.usi.ch/ and the GitHub portion of [The Pile](https://github.com/EleutherAI/github-downloader) (performed on July 7th, 2021). The goal of this dataset is to provide a training set for pretraining large language models on code, to help software engineering researchers better understand their impact on software-related tasks such as code autocompletion. The dataset is split into train, validation, and test splits. There is a version containing duplicates (209 GB compressed) and one with exact duplicates removed (132 GB compressed). It contains mostly JavaScript and Python code, but other programming languages are included as well to varying degrees.
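
The record above distinguishes a raw variant from one with exact duplicates removed. As an illustration of that general technique (not the Code Clippy pipeline itself), a minimal content-hash dedup sketch; the `dedup_exact` helper and the toy corpus are hypothetical:

```
import hashlib

def dedup_exact(files):
    """Keep the first occurrence of each distinct file content.

    `files` is an iterable of (path, content) pairs, with content as bytes.
    """
    seen = set()
    kept = []
    for path, content in files:
        digest = hashlib.sha256(content).hexdigest()
        if digest not in seen:  # exact duplicate -> skip
            seen.add(digest)
            kept.append((path, content))
    return kept

corpus = [("a.py", b"print(1)"), ("b.py", b"print(1)"), ("c.py", b"print(2)")]
assert len(dedup_exact(corpus)) == 2  # b.py is an exact duplicate of a.py
```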

CodedotAI/code_clippy_github | private: false | downloads: 160 | likes: 9
tags: [ "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:unknown", "language:code", "license:mit", "arxiv:2107.03374" ]
description: The Code Clippy dataset consists of various public codebases from GitHub, covering 22 programming languages with 23 file extensions and totalling about 16 TB of data when uncompressed. The dataset was created from the public GitHub dataset on Google BigQuery.

Crives/haha | private: false | tags: [] | description: null | downloads: 298 | likes: 1
Cropinky/flatearther | private: false | tags: [] | description: null | downloads: 151 | likes: 0
Cropinky/rap_lyrics_english | private: false | tags: [] | description: null | downloads: 308 | likes: 0
Cropinky/wow_fishing_bobber | private: false | tags: [] | description: null | downloads: 298 | likes: 0
Cyberfish/pos_tagger | private: false | tags: [] | description: null | downloads: 297 | likes: 0
Cyberfish/text_error_correction | private: false | tags: [] | description: null | downloads: 298 | likes: 0

CyranoB/polarity | private: false | downloads: 297 | likes: 1
tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1509.01626" ]
description: The Amazon reviews dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plain-text review.

DDSC/angry-tweets | private: false | description: null | downloads: 297 | likes: 1
tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0" ]

DDSC/europarl | private: false | description: null | downloads: 297 | likes: 2
tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:da", "license:cc-by-4.0" ]

DDSC/lcc | private: false | description: null | downloads: 309 | likes: 3
tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:da", "license:cc-by-4.0" ]

DDSC/partial-danish-gigaword-no-twitter | private: false | description: null | downloads: 321 | likes: 3
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:da", "license:cc-by-4.0" ]

DDSC/reddit-da-asr-preprocessed | private: false | tags: [] | description: null | downloads: 299 | likes: 0

DDSC/reddit-da | private: false | description: null | downloads: 297 | likes: 2
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:da", "license:mit" ]

DDSC/twitter-sent | private: false | description: null | downloads: 297 | likes: 2
tags: [ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:cc-by-4.0" ]

DELith/github-issues | private: false | tags: [] | description: null | downloads: 299 | likes: 0
DSCI511G1/COP26_Energy_Transition_Tweets | private: false | tags: [] | description: null | downloads: 299 | likes: 1

DanL/scientific-challenges-and-directions-dataset | private: false | description: null | downloads: 266 | likes: 1
tags: [ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "source_datasets:CORD-19", "language:en", "arxiv:2108.13751", "arxiv:2004.10706" ]

Daniele/dante-corpus | private: false | tags: [] | description: null | downloads: 282 | likes: 0
Darren/data | private: false | tags: [] | description: null | downloads: 135 | likes: 0
Datatang/accented_english | private: false | tags: [] | description: null | downloads: 267 | likes: 4
Datatang/accented_mandarin | private: false | tags: [] | description: null | downloads: 267 | likes: 3
Datatang/chinese_dialect | private: false | tags: [] | description: null | downloads: 268 | likes: 5
Datatang/mandarin_chinese | private: false | tags: [] | description: null | downloads: 279 | likes: 5
Datatang/mixed_speech_chinese_english | private: false | tags: [] | description: null | downloads: 265 | likes: 4
Datatang/multi_language | private: false | tags: [] | description: null | downloads: 266 | likes: 3
Datatang/multi_language_conversation | private: false | tags: [] | description: null | downloads: 267 | likes: 5
Davlan/conll2003_de_noMISC | private: false | tags: [] | description: null | downloads: 267 | likes: 0
Davlan/conll2003_noMISC | private: false | tags: [] | description: null | downloads: 268 | likes: 0
Davlan/masakhanerV1 | private: false | tags: [] | description: null | downloads: 135 | likes: 0
DelgadoPanadero/Pokemon | private: false | tags: [] | description: null | downloads: 267 | likes: 3
DeskDown/ALTDataset | private: false | tags: [] | description: null | downloads: 311 | likes: 0
DeskDown/ALTDataset_en-to-fil-vi-id-ms-ja-khm | private: false | tags: [] | description: null | downloads: 267 | likes: 0
DiFronzo/Human_Activity_Recognition | private: false | tags: [] | description: null | downloads: 267 | likes: 1
Dmitriy612/1 | private: false | tags: [] | description: null | downloads: 135 | likes: 0
DoctorSlimm/yipee | private: false | tags: [] | description: null | downloads: 135 | likes: 0
Doohae/klue-mrc-bm25 | private: false | tags: [] | description: null | downloads: 267 | likes: 0
Doohae/modern_music_re | private: false | tags: [] | description: null | downloads: 293 | likes: 0
DoyyingFace/github-embeddings-doy | private: false | tags: [] | description: null | downloads: 267 | likes: 0
DoyyingFace/github-issues-doy | private: false | tags: [] | description: null | downloads: 135 | likes: 0
DrishtiSharma/as_opus100_processed | private: false | tags: [] | description: null | downloads: 266 | likes: 0
DrishtiSharma/bg_opus100_processed | private: false | tags: [] | description: null | downloads: 265 | likes: 0
DrishtiSharma/br_opus100_processed | private: false | tags: [] | description: null | downloads: 265 | likes: 0
DrishtiSharma/hi_opus100_processed | private: false | tags: [] | description: null | downloads: 265 | likes: 0
DrishtiSharma/kk_opus100_processed | private: false | tags: [] | description: null | downloads: 266 | likes: 0
DrishtiSharma/mr_opus100_processed | private: false | tags: [] | description: null | downloads: 266 | likes: 0
DrishtiSharma/or_opus100_processed | private: false | tags: [] | description: null | downloads: 267 | likes: 0
DrishtiSharma/sl_opus100_processed | private: false | tags: [] | description: null | downloads: 267 | likes: 0
DrishtiSharma/sr_opus100_processed | private: false | tags: [] | description: null | downloads: 268 | likes: 0
Dumiiii/common-voice-romaniarss | private: false | tags: [] | description: null | downloads: 267 | likes: 0

EMBO/biolang | private: false | downloads: 663 | likes: 0
tags: [ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n>1M", "language:en", "license:cc-by-4.0" ]
description: This dataset is based on abstracts from the open-access section of Europe PubMed Central and is intended for training language models in the domain of biology.

EMBO/sd-nlp | private: false | downloads: 793 | likes: 0
tags: [ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:named-entity-recognition", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0" ]
description: This dataset is based on the SourceData database and is intended to facilitate training on NLP tasks in the cell and molecular biology domain.

ESZER/H | private: false | tags: [] | description: null | downloads: 135 | likes: 0
Emanuel/UD_Portuguese-Bosque | private: false | tags: [ "language:pt" ] | description: null | downloads: 265 | likes: 1
Emma121/aaaaa | private: false | tags: [ "license:bsd-3-clause-clear" ] | description: null | downloads: 137 | likes: 0
Emma121/testtest | private: false | tags: [] | description: null | downloads: 135 | likes: 0
Enes3774/data | private: false | tags: [] | description: null | downloads: 135 | likes: 0

Exr0n/wiki-entity-similarity | private: false | description: null | downloads: 936 | likes: 4
tags: [ "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:mit", "named entities", "similarity", "paraphrasing", "synonyms", "wikipedia", "arxiv:2004.04906", "arxiv:2202.13581" ]

Eymen3455/xsum_tr | private: false | tags: [] | description: null | downloads: 135 | likes: 0
FIG-Loneliness/FIG-Loneliness | private: false | tags: [] | description: null | downloads: 267 | likes: 1
FL33TW00D/test-dataset | private: false | tags: [] | description: null | downloads: 135 | likes: 0
FRTNX/cosuju | private: false | tags: [] | description: Court Summaries and Judgements (CoSuJu) Dataset | downloads: 400 | likes: 0
FRTNX/worldbank-projects | private: false | tags: [] | description: null | downloads: 135 | likes: 0

Felix-ML/quoteli3 | private: false | downloads: 135 | likes: 0
tags: [ "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0" ]
description: This dataset is a representation of Muzny et al.'s QuoteLi3 dataset as a Hugging Face dataset. It is best used for quote attribution.

Finnish-NLP/mc4_fi_cleaned | private: false | description: null | downloads: 265 | likes: 3
tags: [ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|mc4", "language:fi" ]

Firoj/HumAID | private: false | tags: [] | downloads: 267 | likes: 1
description: The HumAID Twitter dataset consists of several thousand manually annotated tweets collected during 19 major natural disaster events, including earthquakes, hurricanes, wildfires, and floods, that occurred from 2016 to 2019 across different parts of the world. The dataset contains only English tweets and is the largest dataset for crisis informatics so far. The annotations cover the following humanitarian categories:
- Caution and advice
- Displaced people and evacuations
- Don't know / can't judge
- Infrastructure and utility damage
- Injured or dead people
- Missing or found people
- Not humanitarian
- Other relevant information
- Requests or urgent needs
- Rescue volunteering or donation effort
- Sympathy and support

Francois/futures_es | private: false | tags: [] | description: null | downloads: 134 | likes: 0

Fraser/mnist-text-default | private: false | tags: [] | downloads: 266 | likes: 0
description: MNIST dataset adapted to a text-based representation. This allows testing interpolation quality for Transformer-VAEs. The system is heavily inspired by Matthew Rayfield's work: https://youtu.be/Z9K3cwSL6uM. It works by quantising each MNIST pixel into one of 64 characters (a quantisation sketch follows this record). Every sample has an up and a down version to encourage the model to learn rotation-invariant features. Use the `.array_to_text(` and `.text_to_array(` methods to test your generated data. Data format:
- text (30 x 28 tokens, 840 tokens total): textual representation of an MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
03 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
04 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
05 down ! ! ! ! ! ! ! ! ! ! ! ! ! % % % @ C L ' J a ^ @ ! ! ! !
06 down ! ! ! ! ! ! ! ! ( * 8 G K ` ` ` ` ` Y L ` ] Q 1 ! ! ! !
07 down ! ! ! ! ! ! ! - \ ` ` ` ` ` ` ` ` _ 8 5 5 / * ! ! ! ! !
08 down ! ! ! ! ! ! ! % W ` ` ` ` ` R N ^ ] ! ! ! ! ! ! ! ! ! !
09 down ! ! ! ! ! ! ! ! 5 H ; ` ` T # ! + G ! ! ! ! ! ! ! ! ! !
10 down ! ! ! ! ! ! ! ! ! $ ! G ` 7 ! ! ! ! ! ! ! ! ! ! ! ! ! !
11 down ! ! ! ! ! ! ! ! ! ! ! C ` P ! ! ! ! ! ! ! ! ! ! ! ! ! !
12 down ! ! ! ! ! ! ! ! ! ! ! # P ` 2 ! ! ! ! ! ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ) ] Y I < ! ! ! ! ! ! ! ! ! ! !
14 down ! ! ! ! ! ! ! ! ! ! ! ! ! 5 ] ` ` > ' ! ! ! ! ! ! ! ! !
15 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , O ` ` F ' ! ! ! ! ! ! ! !
16 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! % 8 ` ` O ! ! ! ! ! ! ! !
17 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! _ ` _ 1 ! ! ! ! ! ! !
18 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! , A N ` ` T ! ! ! ! ! ! ! !
19 down ! ! ! ! ! ! ! ! ! ! ! ! * F Z ` ` ` _ N ! ! ! ! ! ! ! !
20 down ! ! ! ! ! ! ! ! ! ! ' = X ` ` ` ` S 4 ! ! ! ! ! ! ! ! !
21 down ! ! ! ! ! ! ! ! & 1 V ` ` ` ` R 5 ! ! ! ! ! ! ! ! ! ! !
22 down ! ! ! ! ! ! % K W ` ` ` ` Q 5 # ! ! ! ! ! ! ! ! ! ! ! !
23 down ! ! ! ! . L Y ` ` ` ` ^ B # ! ! ! ! ! ! ! ! ! ! ! ! ! !
24 down ! ! ! ! C ` ` ` V B B % ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
25 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
26 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
27 down ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: just a number, with the text matching the label.
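
The quantisation described in the record above maps each 0-255 greyscale pixel onto one of the 64 printable ASCII characters from '!' (33) to '`' (96), which is why blank background renders as '!' and bright strokes as '`'. Below is a minimal round-trip sketch under that assumption; the exact rounding used by the dataset's own `.array_to_text(`/`.text_to_array(` helpers may differ:

```
# Hypothetical re-implementation of the 64-level pixel quantisation
# described above; not the dataset's own helper code.

def pixel_to_char(p: int) -> str:
    """Map a 0-255 pixel to one of 64 characters, '!' (33) .. '`' (96)."""
    return chr(33 + (p * 64) // 256)

def char_to_pixel(c: str) -> int:
    """Invert the mapping up to bucket resolution (4 grey levels per char)."""
    return (ord(c) - 33) * 4 + 2  # midpoint of the 4-value bucket

row = [0, 0, 128, 255]
text = " ".join(pixel_to_char(p) for p in row)
print(text)                                      # ! ! A `
print([char_to_pixel(c) for c in text.split()])  # [2, 2, 130, 254]
```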

Fraser/mnist-text-no-spaces | private: false | tags: [] | downloads: 267 | likes: 0
description: MNIST dataset adapted to a text-based representation. This allows testing interpolation quality for Transformer-VAEs. The system is heavily inspired by Matthew Rayfield's work: https://youtu.be/Z9K3cwSL6uM. It works by quantising each MNIST pixel into one of 64 characters. Every sample has an up and a down version to encourage the model to learn rotation-invariant features. Use the `.array_to_text(` and `.text_to_array(` methods to test your generated data. Spaces are removed to get better BPE compression on sequences. **Should only be used with a trained tokenizer.** Data format:
- text (30 x 28 tokens, 840 tokens total): textual representation of an MNIST digit, for example:
```
00down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
01down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
02down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
03down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
04down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
05down!!!!!!!!!!!!!%%%@CL'Ja^@!!!!
06down!!!!!!!!(*8GK`````YL`]Q1!!!!
07down!!!!!!!-\````````_855/*!!!!!
08down!!!!!!!%W`````RN^]!!!!!!!!!!
09down!!!!!!!!5H;``T#!+G!!!!!!!!!!
10down!!!!!!!!!$!G`7!!!!!!!!!!!!!!
11down!!!!!!!!!!!C`P!!!!!!!!!!!!!!
12down!!!!!!!!!!!#P`2!!!!!!!!!!!!!
13down!!!!!!!!!!!!)]YI<!!!!!!!!!!!
14down!!!!!!!!!!!!!5]``>'!!!!!!!!!
15down!!!!!!!!!!!!!!,O``F'!!!!!!!!
16down!!!!!!!!!!!!!!!%8``O!!!!!!!!
17down!!!!!!!!!!!!!!!!!_`_1!!!!!!!
18down!!!!!!!!!!!!!!,AN``T!!!!!!!!
19down!!!!!!!!!!!!*FZ```_N!!!!!!!!
20down!!!!!!!!!!'=X````S4!!!!!!!!!
21down!!!!!!!!&1V````R5!!!!!!!!!!!
22down!!!!!!%KW````Q5#!!!!!!!!!!!!
23down!!!!.LY````^B#!!!!!!!!!!!!!!
24down!!!!C```VBB%!!!!!!!!!!!!!!!!
25down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
26down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
27down!!!!!!!!!!!!!!!!!!!!!!!!!!!!
```
- label: just a number, with the text matching the label.

Fraser/mnist-text-small | private: false | tags: [] | downloads: 329 | likes: 0
description: MNIST dataset adapted to a text-based representation, with images reduced to ~1/4 of the original area by taking a max pool. This allows testing interpolation quality for Transformer-VAEs. The system is heavily inspired by Matthew Rayfield's work: https://youtu.be/Z9K3cwSL6uM. It works by quantising each MNIST pixel into one of 64 characters. Every sample has an up and a down version to encourage the model to learn rotation-invariant features. Use the `.array_to_text(` and `.text_to_array(` methods to test your generated data. Data format:
- text (16 x 14 tokens, 224 tokens total): textual representation of an MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! % % C L a ^ ! !
03 down ! ! ! - ` ` ` ` ` Y ` Q ! !
04 down ! ! ! % ` ` ` R ^ ! ! ! ! !
05 down ! ! ! ! $ G ` ! ! ! ! ! ! !
06 down ! ! ! ! ! # ` Y < ! ! ! ! !
07 down ! ! ! ! ! ! 5 ` ` F ! ! ! !
08 down ! ! ! ! ! ! ! % ` ` 1 ! ! !
09 down ! ! ! ! ! ! F ` ` ` ! ! ! !
10 down ! ! ! ! 1 ` ` ` ` 4 ! ! ! !
11 down ! ! L ` ` ` ` 5 ! ! ! ! ! !
12 down ! ! ` ` V B ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: just a number, with the text matching the label.