Columns: id (string, 2–115 chars); private (bool, 1 class); tags (sequence); description (string, 0–5.93k chars); downloads (int64, 0–1.14M); likes (int64, 0–1.79k)
NYTK/HuCOLA
false
[ "task_ids:text-simplification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:hu", "license:cc-by-sa-4.0" ]
null
265
0
NYTK/HuCoPA
false
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|other", "language:hu", "license:bsd-2-clause", "commonsense-reasoning" ]
null
266
0
NYTK/HuRC
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|other", "language:hu", "license:cc-by-4.0" ]
null
265
0
NYTK/HuSST
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:text-scoring", "annotations_creators:found", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|other", "language:hu", "license:bsd-2-clause" ]
null
265
0
NYTK/HuWNLI
false
[ "task_categories:other", "task_ids:coreference-resolution", "annotations_creators:found", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|other", "language:hu", "license:cc-by-sa-4.0", "structure-prediction" ]
null
265
2
NahedAbdelgaber/evaluating-student-writing
false
[]
null
267
0
Narsil/asr_dummy
false
[]
Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) results for various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce the Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among multiple usages of the shared model, we especially focus on extracting the representation learned from SSL due to its preferable re-usability. We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel the research in representation learning and general speech processing.

Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows (a loading sketch follows this entry):

```python
import soundfile as sf

def map_to_array(batch):
    # Decode the .flac file referenced by "file" into a raw waveform array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
432
0
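For the Narsil/asr_dummy entry above, a minimal loading sketch showing how the decoding snippet from its description might be applied; the split name (and whether a configuration name is required) is an assumption, not confirmed by the card:

```python
# Minimal sketch: load the dataset, then apply the card's map_to_array conversion.
# The split name "test" is an assumption; a configuration name may also be required.
import soundfile as sf
from datasets import load_dataset

dataset = load_dataset("Narsil/asr_dummy", split="test")  # assumed split

def map_to_array(batch):
    # Decode the .flac file referenced by "file" into a raw waveform array.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
print(len(dataset[0]["speech"]))  # number of samples in the first decoded waveform
```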
Narsil/conversational_dummy
false
[]
null
265
0
Narsil/image_dummy
false
[]
null
266
0
Narsil/test_data
false
[]
null
267
0
Nathanael/NPS
false
[]
null
134
0
Navigator/dodydard-marty
false
[]
null
134
0
NbAiLab/NCC_small_100
false
[ "arxiv:2104.09617" ]
Norwegian Colossal Corpus v2. Short sequences of maximum 100k characters.
265
0
NbAiLab/NCC_small_divided
false
[ "arxiv:2104.09617" ]
Norwegian Colossal Corpus v2. Short sequences of maximum 100k characters.
135
0
NbAiLab/NPSC
false
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:2G<n<1B", "source_datasets:original", "language:no", "language:nb", "language:nn", "license:cc0-1.0", "speech-modeling", "arxiv:2201.10881" ]
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway. NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed to either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, the original corpus contains a significant amount of metadata. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e. dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings. In total, the corpus comprises sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences or 1.2 million words.
271
3
NbAiLab/NPSC_test
false
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:2G<n<1B", "source_datasets:original", "language:nb", "language:no", "language:nn", "license:cc0-1.0", "speech-modeling" ]
null
265
0
NbAiLab/NPSC_test2
false
[ "license:cc0-1.0" ]
null
133
0
NbAiLab/bokmaal_admin
false
[ "arxiv:2104.09617" ]
Norwegian Colossal Corpus v2. Short sequences of maximum 100k characters.
266
0
NbAiLab/norec_agg
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2011.02686" ]
Aggregated NoReC_fine: A Fine-grained Sentiment Dataset for Norwegian. This dataset was created by the Nordic Language Processing Laboratory by aggregating the fine-grained annotations in NoReC_fine and removing sentences with conflicting or no sentiment.
264
0
NbAiLab/norne
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:other", "structure-prediction", "arxiv:1911.12146" ]
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
134
2
NbAiLab/norwegian_parliament
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:no", "license:cc-by-4.0" ]
The Norwegian Parliament Speeches is a collection of text passages from 1998 to 2016, delivered in the Norwegian Parliament (Storting) by members of the two major parties Fremskrittspartiet and Sosialistisk Venstreparti.
265
0
Niciu/github-issues
false
[]
null
134
0
Niciu/test-cre-dataset-issues
false
[]
null
135
0
Niciu/test-squad
false
[]
null
135
0
NikolajW/NPS_nonNormalized-Cased
false
[]
null
135
0
NishinoTSK/leishmaniaV2
false
[]
null
135
0
NishinoTSK/leishmaniav1
false
[]
null
267
0
Nuwaisir/Quran_speech_recognition_kaggle
false
[]
null
265
0
Ofrit/tmp
false
[]
null
135
0
Omar2027/caner_replicate
false
[]
The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling issues in the recognition of Arabic named entities.
267
0
OmarN121/train
false
[]
null
267
0
PDJ107/riot-data
false
[]
null
136
0
Paul/hatecheck
false
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2012.15606" ]
null
295
2
PaulLerner/triviaqa_for_viquae
false
[]
null
267
0
PaulLerner/triviaqa_splits_for_viquae
false
[]
null
267
0
PaulLerner/viquae_all_images
false
[]
null
135
0
PaulLerner/viquae_dataset
false
[]
null
277
2
PaulLerner/viquae_images
false
[]
null
135
0
PaulLerner/viquae_wikipedia
false
[]
null
273
0
Pengfei/asfwe
false
[]
null
135
0
Pengfei/test
false
[]
null
135
0
Pengfei/test1
false
[]
null
135
0
PereLluis13/parla_text_corpus
false
[]
null
267
0
PereLluis13/spanish_speech_text
false
[]
null
265
0
Perkhad/corejur
false
[]
null
267
0
PlanTL-GOB-ES/SQAC
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:es", "license:cc-by-sa-4.0", "arxiv:1606.05250" ]
This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment. The sources of the contexts are: * Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode). * News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/). * Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode). This dataset can be used to build extractive-QA systems.
292
2
PlanTL-GOB-ES/cantemist-ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:es", "license:cc-by-4.0", "biomedical", "clinical", "spanish" ]
https://temu.bsc.es/cantemist/
267
1
PlanTL-GOB-ES/pharmaconer
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:es", "license:cc-by-4.0", "biomedical", "clinical", "spanish" ]
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track. This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online). The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR. The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each. In terms of training examples, this translates to a total of 8074, 3764 and 3931 annotated sentences in the respective sets. The original dataset was distributed in Brat format (https://brat.nlplab.org/standoff.html); a minimal parsing sketch for this format appears after this entry. For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to [email protected]
274
0
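The PharmaCoNER entry above notes that the original distribution uses the Brat standoff format; here is a minimal sketch of reading entity ("T") lines from a Brat .ann file. The file name is hypothetical, and only the format documented at brat.nlplab.org is assumed:

```python
# Minimal sketch: parse entity annotations ("T" lines) from a Brat standoff .ann file.
# A "T" line looks like: T1<TAB>NORMALIZABLES 10 25<TAB>surface text
def read_brat_entities(ann_path):
    entities = []
    with open(ann_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.startswith("T"):
                continue  # skip relations, events, attributes, notes
            ann_id, type_and_offsets, surface = line.split("\t")
            etype, span = type_and_offsets.split(" ", 1)
            start, end = span.split(";")[0].split()  # keep only the first fragment of a discontinuous span
            entities.append({"id": ann_id, "type": etype, "start": int(start), "end": int(end), "text": surface})
    return entities

# entities = read_brat_entities("case_001.ann")  # hypothetical file name
```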
Plim/common_voice_7_0_fr_processed
false
[ "language:fr" ]
null
265
0
Plim/fr_corpora_parliament_processed
false
[ "language:fr" ]
null
265
0
Plim/fr_wikipedia_processed
false
[ "language:fr" ]
null
265
0
Plim/language_model_fr
false
[ "language:fr" ]
null
264
0
Pongsaky/Wiki_SCG
false
[]
null
135
0
Pratik/Gujarati_OpenSLR
false
[]
null
134
0
Pyjay/emotion_nl
false
[]
null
143
0
Pyke/patent_abstract
false
[]
null
134
0
QA/abk-eng
false
[]
null
135
0
R0bk/XFUN
false
[ "license:mit" ]
null
152
0
RBG-AI/CoRePooL
false
[]
null
135
0
Recognai/ag_news_corrected_labels
false
[]
null
267
0
Recognai/corrected_labels_ag_news
false
[]
null
267
0
Recognai/gutenberg_spacy-ner
false
[]
null
135
0
Recognai/imdb_spacy-ner
false
[]
null
135
0
Recognai/news
false
[]
null
267
0
Recognai/sentiment-banking
false
[]
null
1,691
0
Recognai/veganuary
false
[]
null
267
0
Remesita/tagged_reviews
false
[]
null
134
0
RohanAiLab/persian_blog
false
[ "task_categories:text-generation", "task_ids:language-modeling", "source_datasets:original", "language:fa" ]
persian_blog is a dataset consisting of 400K blog posts from various websites, covering a variety of tones. This dataset can be used in different NLG tasks; as a show-case, it is used in training reformer-persian.
327
1
RohanAiLab/persian_daily_news
false
[ "source_datasets:original", "language:fa" ]
Persian Daily News dataset is a collection of 2 million news articles, each paired with its headline. This dataset contains news articles and their summaries for the last 10 years. This dataset is provided by Rohan AI lab for research purposes.
263
0
RohanAiLab/persian_news_dataset
false
[ "task_categories:text-classification", "task_ids:language-modeling", "task_ids:multi-class-classification", "source_datasets:original", "language:fa" ]
persian_news_dataset is a collection of 5 million news articles. News articles have been gathered from more than 10 news agencies over the last 12 years. The dataset is provided by Rohan AI lab for research purposes. For more information refer to this link:
266
1
RollingMuffin/test_scripts
false
[]
This dataset is designed to generate lyrics with HuggingArtists.
267
0
RuudVelo/commonvoice_mt_8_processed
false
[]
null
135
0
RuudVelo/commonvoice_nl_8_processed
false
[]
null
135
0
RuudVelo/nl_corpora_parliament_processed
false
[]
null
266
0
SCourthial/test
false
[]
null
135
0
Sabokou/qg_squad_modified
false
[]
null
267
0
Sabokou/qg_squad_modified_dev
false
[]
null
267
0
SajjadAyoubi/persian_qa
false
[]
Persian Question Answering (PersianQA) Dataset is a reading comprehension dataset on Persian Wikipedia. The crowd-sourced dataset consists of more than 9,000 entries. Each entry is either an impossible-to-answer question or a question with one or more answers spanning the passage (the context) from which the questioner proposed the question. Much like the SQuAD2.0 dataset, the impossible or unanswerable questions can be utilized to create a system which "knows that it doesn't know the answer" (a minimal filtering sketch follows this entry).
282
2
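For the SajjadAyoubi/persian_qa entry above, a minimal sketch separating answerable from unanswerable questions; the SQuAD-style column layout ("answers" with a "text" list) and the split name are assumptions about the schema, not confirmed here:

```python
# Minimal sketch: split SQuAD2.0-style entries into answerable vs. unanswerable questions.
# The "answers"/"text" columns and the "train" split are assumed; verify against the dataset card.
from datasets import load_dataset

dataset = load_dataset("SajjadAyoubi/persian_qa", split="train")  # assumed split

def is_answerable(example):
    # Unanswerable questions carry an empty answer list in SQuAD2.0-style schemas.
    return len(example["answers"]["text"]) > 0

answerable = dataset.filter(is_answerable)
unanswerable = dataset.filter(lambda ex: not is_answerable(ex))
print(len(answerable), len(unanswerable))
```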
Sakonii/nepalitext-language-model-dataset
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "language_creators:other", "multilinguality:monolingual", "source_datasets:extended|oscar", "source_datasets:extended|cc100", "language:ne", "license:cc0-1.0" ]
null
271
0
Sam2021/Arguement_Mining_CL2017
false
[]
Tokens along with chunk ids in IOB1 format. The beginning of an argument is denoted by B-ARG, the inside of an argument by I-ARG, and other chunks are O. The original train/test split as used by the paper is provided (a span-extraction sketch follows this entry).
267
1
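For the Sam2021/Arguement_Mining_CL2017 entry above, a minimal sketch recovering argument spans from an IOB1 tag sequence; the token/tag example at the bottom is illustrative only, not the dataset's actual column layout:

```python
# Minimal sketch: collect argument spans from IOB1 tags (B-ARG / I-ARG / O).
# In IOB1, a chunk may start with I-ARG; B-ARG marks a chunk that directly follows another chunk.
def iob1_spans(tokens, tags):
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "O":
            if current:
                spans.append(current)
                current = []
        elif tag == "B-ARG":
            if current:  # close the previous chunk before starting a new one
                spans.append(current)
            current = [token]
        else:  # "I-ARG" continues the open chunk, or starts one if none is open
            current.append(token)
    if current:
        spans.append(current)
    return spans

# Illustrative tokens/tags only.
tokens = ["We", "should", "ban", "it", ".", "It", "harms", "people", "."]
tags = ["O", "I-ARG", "I-ARG", "I-ARG", "O", "I-ARG", "I-ARG", "I-ARG", "O"]
print(iob1_spans(tokens, tags))  # [['should', 'ban', 'it'], ['It', 'harms', 'people']]
```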
Samip/func
false
[]
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
267
0
SaulLu/Natural_Questions_HTML
false
[]
null
266
0
SaulLu/Natural_Questions_HTML_Toy
false
[]
null
267
0
SaulLu/Natural_Questions_HTML_reduced_all
false
[]
null
266
0
SaulLu/test
false
[]
null
135
0
SaulLu/toy_struc_dataset
false
[]
null
267
0
SebastianS/github-issues
false
[ "task_categories:text-classification", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "language:en-US" ]
null
265
0
SergeiGKS/wikiner_fr_job
false
[]
null
136
0
Serhii/Custom_SQuAD
false
[]
Shellcode_IA32 is a dataset for shellcode generation from English intents. The shellcodes are compilable on Intel Architecture 32-bits.
267
0
SetFit/20_newsgroups
false
[]
null
3,021
3
SetFit/TREC-QC
false
[]
null
328
0
SetFit/ag_news
false
[]
null
795
0
SetFit/amazon_counterfactual
false
[ "arxiv:2104.06893" ]
The dataset contains sentences from Amazon customer reviews (sampled from the Amazon product review dataset) annotated for counterfactual detection (CFD) binary classification. Counterfactual statements describe events that did not or cannot take place. Counterfactual statements may be identified as statements of the form: if p was true, then q would be true (i.e. assertions whose antecedent (p) and consequent (q) are known or assumed to be false).
666
0
SetFit/amazon_counterfactual_en
false
[ "arxiv:2104.06893" ]
null
722
0
SetFit/amazon_polarity
false
[]
null
267
0
SetFit/bbc-news
false
[]
null
443
3
SetFit/emotion
false
[]
null
4,082
4
SetFit/enron_spam
false
[]
null
1,359
5
SetFit/ethos
false
[]
ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset: Ethos_Dataset_Binary contains 998 comments, each with a label about hate speech presence or absence; 565 of them do not contain hate speech, while the remaining 433 do. Ethos_Dataset_Multi_Label contains 8 labels for the 433 comments with hate speech content. These labels are violence (whether it incites violence (1) or not (0)), directed_vs_general (whether it is directed at a person (1) or a group (0)), and 6 labels about the category of hate speech, such as gender, race, national_origin, disability, religion and sexual_orientation.
400
0
SetFit/ethos_binary
false
[]
null
275
0