Columns: id (string, length 2–115) · private (bool, 1 class) · tags (sequence) · description (string, length 0–5.93k) · downloads (int64, 0–1.14M) · likes (int64, 0–1.79k)
maastrichtlawtech/bsard
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:cc-by-nc-sa-4.0", "arxiv:2108.11792" ]
null
259
1
antoinegk/HealthChallenge_dataset
false
[]
null
271
0
anton-l/common_language
false
[ "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|common_voice", "language:ar", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fr", "language:fy", "language:ia", "language:id", "language:it", "language:ja", "language:ka", "language:kab", "language:ky", "language:lv", "language:mn", "language:mt", "language:nl", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sl", "language:sv", "language:ta", "language:tr", "language:tt", "language:uk", "language:zh", "license:cc-by-nc-4.0" ]
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
263
0
anton-l/superb
false
[ "task_ids:keyword-spotting", "task_ids:speaker-identification", "task_ids:intent-classification", "task_ids:slot-filling", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "source_datasets:extended|librispeech_asr", "source_datasets:extended|other-librimix", "source_datasets:extended|other-speech_commands", "language:en", "license:unknown", "arxiv:2105.01051" ]
Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among multiple usages of the shared model, we especially focus on extracting the representation learned from SSL due to its preferable re-usability. We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel the research in representation learning and general speech processing. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .wav format and is not converted to a float32 array. To convert the audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
918
1
anton-l/superb_demo
false
[]
Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among multiple usages of the shared model, we especially focus on extracting the representation learned from SSL due to its preferable re-usability. We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel the research in representation learning and general speech processing. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert the audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
3,036
0
anton-l/superb_dummy
false
[]
Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among multiple usages of the shared model, we especially focus on extracting the representation learned from SSL due to its preferable re-usability. We present a simple framework to solve SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel the research in representation learning and general speech processing.
4,017
0
anukaver/EstQA
false
[ "language:et" ]
null
266
0
anuragshas/bg_opus100_processed
false
[]
null
265
0
anuragshas/ha_cc100_processed
false
[]
null
265
0
anuragshas/ha_opus100_processed
false
[]
null
265
0
anuragshas/hi_opus100_processed
false
[]
null
265
0
anuragshas/lv_opus100_processed
false
[]
null
265
0
anuragshas/mr_cc100_processed
false
[]
null
265
0
anuragshas/mt_opus100_processed
false
[]
null
265
0
anuragshas/pa_cc100_processed
false
[]
null
265
0
anuragshas/sk_opus100_processed
false
[]
null
265
0
anuragshas/sl_opus100_processed
false
[]
null
265
0
anuragshas/ur_opus100_processed
false
[]
null
265
0
anushakamath/sv_corpora_parliament_processed_v0
false
[]
null
265
0
anzorq/kbd-ru-1.67M-temp
false
[]
null
265
0
anzorq/kbd-ru-jsonl-tmp
false
[]
null
265
0
anzorq/kbd-ru-temp
false
[]
null
265
0
arch-raven/MAMI
false
[]
null
263
0
arjundd/meddlr-data
false
[ "license:apache-2.0" ]
null
266
0
arjunth2001/online_privacy_qna
false
[]
null
265
1
artemis13fowl/github-issues
false
[]
null
265
0
artyeth/Dorian
false
[]
null
134
0
aryanpatke/github-issues
false
[]
null
134
0
lmqg/qg_jaquad
false
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:SkelterLabsInc/JaQuAD", "language:ja", "license:cc-by-sa-3.0", "question-generation", "arxiv:2210.03992" ]
[JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset for question generation (QG) task. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
867
3
lmqg/qg_squad
false
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:squad", "language:en", "license:cc-by-4.0", "question-generation", "arxiv:2210.03992", "arxiv:1705.00106" ]
[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) evaluation set for question generation (QG) models. The split of test and development set follows the ["Neural Question Generation"](https://arxiv.org/abs/1705.00106) work and is compatible with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11).
3,825
3
aseifert/merlin
false
[ "multilinguality:translation", "size_categories:unknown", "language:cz", "language:de", "language:it" ]
null
264
0
aseifert/pie-synthetic
false
[ "multilinguality:translation", "size_categories:unknown", "language:en" ]
null
264
1
ashraq/dhivehi-corpus
false
[]
This is a dataset put together to pretrain a language model in Dhivehi, the language of the Maldives.
274
2
asi/wikitext_fr
false
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:fr", "license:cc-by-sa-4.0", "arxiv:1609.07843" ]
The Wikitext-fr language modeling dataset consists of over 70 million tokens extracted from the set of French Wikipedia articles classified as "quality articles" or "good articles". The aim is to replicate the English benchmark.
423
2
asoroa/bsbasque
false
[]
BSBasque dataset. The text is extracted from the following domains: https://www.berria.eus, https://eu.wikipedia.org, https://goiena.eus, https://www.argia.eus, https://goierri.hitza.eus
265
0
astarostap/antisemitic-tweets
false
[]
null
134
0
astarostap/antisemitic_tweets
false
[]
null
134
0
astarostap/autonlp-data-antisemitism-2
false
[ "task_categories:text-classification", "language:en" ]
null
263
0
astremo/friendly_JA_corpus
false
[]
null
266
2
astrideducation/cefr-combined-no-cefr-test
false
[]
This dataset contains 3,370,555 sentences, each with an assigned CEFR level derived from EFLLex (https://cental.uclouvain.be/cefrlex/efllex/download). The sentences come from "the pile books3", which is available on Huggingface (https://huggingface.co/datasets/the_pile_books3). The CEFR levels used are A1, A2, B1, B2 and C1, and there is an equal number of sentences for each level. Assigning each sentence a CEFR level is based on the concept of "shifted frequency distribution", introduced by David Alfter; his paper can be found at https://gupea.ub.gu.se/bitstream/2077/66861/4/gupea_2077_66861_4.pdf. For each word in each sentence, take the CEFR level with the highest "shifted frequency distribution" in the EFLLex table. After all words have been processed, the sentence gets annotated with the most frequently appearing CEFR level from the whole sentence (a minimal sketch follows this entry).
265
0
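A minimal sketch of the labeling rule described above, under the assumption that a `word_level` lookup returns each word's CEFR level with the highest shifted frequency distribution in the EFLLex table (the lookup itself is hypothetical):

```python
from collections import Counter

def label_sentence(words, word_level):
    """Label a sentence with its most frequent word-level CEFR label.

    `word_level` is a hypothetical lookup returning the CEFR level with
    the highest "shifted frequency distribution" for a word in EFLLex,
    or None for out-of-vocabulary words.
    """
    levels = list(filter(None, map(word_level, words)))
    if not levels:
        return None
    # Majority vote over per-word levels decides the sentence label.
    return Counter(levels).most_common(1)[0][0]

# Toy lookup: two A1 words outvote one B2 word, so the sentence is A1.
print(label_sentence(["the", "cat", "negotiates"],
                     {"the": "A1", "cat": "A1", "negotiates": "B2"}.get))
```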
atelders/politweets
false
[]
null
134
0
athar/QA
false
[]
null
265
0
athar/a_b
false
[]
null
134
1
austin/rheum_abstracts
false
[]
null
265
0
avadesian/dddd
false
[]
null
134
0
avanishcobaltest/datasetavanish
false
[]
null
134
0
averyanalex/panorama
false
[]
null
134
0
azuur/es_corpora_parliament_processed
false
[]
null
265
0
azuur/gn_wiki_cleaned
false
[]
null
265
0
badranx/opus_raw
false
[]
Monolingual corpus from http://www.opensubtitles.org/. Please check http://www.opensubtitles.org/ for the available corpora and licenses.
265
1
bavard/personachat_truecased
false
[]
A version of the PersonaChat dataset that has been true-cased and given more normalized punctuation. The original PersonaChat dataset is in all lower case and has extra spaces around each clause/sentence-separating punctuation mark. This version of the dataset looks more like natural language, with sentence capitalization, proper-noun capitalization, and normalized whitespace. Also, each dialogue turn includes a pool of distractor candidate responses, which can be used by a multiple-choice regularization loss during training (see the sketch after this entry).
800
9
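A minimal sketch of how the per-turn candidate pool could feed a multiple-choice loss, assuming a hypothetical model that assigns a scalar score to each candidate response (index 0 being the gold next utterance):

```python
import math

def multiple_choice_loss(candidate_scores, gold_index=0):
    """Softmax cross-entropy over one turn's candidate pool.

    `candidate_scores` are scores from a hypothetical scoring model for
    the gold response plus its distractor candidates; minimizing this
    loss pushes the gold response's score above the distractors'.
    """
    log_z = math.log(sum(math.exp(s) for s in candidate_scores))
    return log_z - candidate_scores[gold_index]

# Gold response scored 2.0 against two distractors scoring 0.5 and -1.0.
print(multiple_choice_loss([2.0, 0.5, -1.0]))  # ~0.24
```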
be4rr/github-issues
false
[]
null
265
0
beacon/test
false
[]
null
134
0
benjaminbeilharz/better_daily_dialog
false
[]
null
265
0
benjaminbeilharz/daily_dialog_w_turn_templates
false
[]
null
265
0
benjaminbeilharz/empathetic_dialogues_for_lm
false
[]
null
265
0
berkergurcay/2020-10K-Reports
false
[]
null
134
1
bertin-project/mc4-es-sampled
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "size_categories:10M<n<100M", "size_categories:100M<n<1B", "source_datasets:mc4", "source_datasets:bertin-project/mc4-sampling", "language:es", "license:odc-by", "arxiv:1910.10683", "arxiv:2207.06814" ]
50 million documents in Spanish extracted from mC4 by applying perplexity sampling via mc4-sampling: "https://huggingface.co/datasets/bertin-project/mc4-sampling". Please refer to the BERTIN Project. The original dataset is the Multilingual Colossal, Cleaned version of Common Crawl's web crawl corpus (mC4), based on the Common Crawl dataset: "https://commoncrawl.org", and processed by AllenAI.
534
0
bertin-project/mc4-sampling
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "size_categories:10M<n<100M", "size_categories:100M<n<1B", "size_categories:1B<n<10B", "source_datasets:original", "language:af", "language:am", "language:ar", "language:az", "language:be", "language:bg", "language:bn", "language:ca", "language:ceb", "language:co", "language:cs", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fil", "language:fr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gu", "language:ha", "language:haw", "language:hi", "language:hmn", "language:ht", "language:hu", "language:hy", "language:id", "language:ig", "language:is", "language:it", "language:iw", "language:ja", "language:jv", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lb", "language:lo", "language:lt", "language:lv", "language:mg", "language:mi", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:ne", "language:nl", "language:no", "language:ny", "language:pa", "language:pl", "language:ps", "language:pt", "language:ro", "language:ru", "language:sd", "language:si", "language:sk", "language:sl", "language:sm", "language:sn", "language:so", "language:sq", "language:sr", "language:st", "language:su", "language:sv", "language:sw", "language:ta", "language:te", "language:tg", "language:th", "language:tr", "language:uk", "language:und", "language:ur", "language:uz", "language:vi", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "license:odc-by", "arxiv:1910.10683" ]
A sampling-enabled version of mC4, the colossal, cleaned version of Common Crawl's web crawl corpus, based on the Common Crawl dataset: "https://commoncrawl.org". This is a version of AllenAI's processed version of Google's mC4 dataset in which sampling methods are implemented to run on the fly (a sketch of perplexity sampling follows this entry).
190
6
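A minimal sketch of the perplexity-sampling idea: keep a document with probability given by a Gaussian-shaped function of its perplexity under a reference language model. The `perplexity` argument and the `mean`/`width` values are illustrative assumptions, not the project's actual parameters:

```python
import math
import random

def keep_document(text, perplexity, mean=2000.0, width=800.0):
    """Accept `text` with probability exp(-((ppl - mean) / width) ** 2).

    Documents whose perplexity is close to `mean` are kept most often;
    outliers (boilerplate or gibberish) are mostly dropped. All numeric
    values here are made up for illustration.
    """
    ppl = perplexity(text)
    return random.random() < math.exp(-(((ppl - mean) / width) ** 2))

# With a toy perplexity function, a mid-range document usually survives.
print(keep_document("hola mundo", perplexity=lambda text: 1900.0))
```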
bhadresh-savani/web_split
false
[]
null
265
1
bhavnicksm/sentihood
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1610.03771" ]
null
301
2
bhigy/buckeye_asr
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:other" ]
The Buckeye Corpus of conversational speech contains high-quality recordings from 40 speakers in Columbus, OH, conversing freely with an interviewer. The speech has been orthographically transcribed and phonetically labeled.
275
0
bigscience/P3
false
[ "task_categories:other", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:100M<n<1B", "language:en", "license:apache-2.0", "arxiv:2110.08207" ]
P3 (Public Pool of Prompts) is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of an NLI dataset, the data example would include fields for *Premise, Hypothesis, Label*. An input template would be *If {Premise} is true, is it also true that {Hypothesis}?*, whereas a target template can be defined with the label choices *Choices[label]*. Here *Choices* is prompt-specific metadata that consists of the options *yes, maybe, no* corresponding to *label* being entailment (0), neutral (1) or contradiction (2); a sketch of such a template appears after this entry. Prompts are collected using [Promptsource](https://github.com/bigscience-workshop/promptsource), an interface to interactively write prompts on datasets and collect prompt-specific metadata such as evaluation metrics. As of October 13th, there are 2,000 prompts collected for 270+ data(sub)sets. The collection of prompts of P3 is publicly available on [Promptsource](https://github.com/bigscience-workshop/promptsource). To train [T0*](https://huggingface.co/bigscience/T0pp), we used a subset of the prompts available in Promptsource (see details [here](https://huggingface.co/bigscience/T0pp#training-data)). However, some of the prompts use `random.choice`, a method that selects uniformly at random an option in a list of valid possibilities. For reproducibility purposes, we release the collection of prompted examples used to train T0*. **The data available here are the materialized version of the prompted datasets used in [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207), which represent only a subset of the datasets for which there is at least one prompt in Promptsource.**
265,518
105
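A minimal sketch of the NLI template from the card, written as a plain function; the names and structure are illustrative, not Promptsource's actual API:

```python
# Prompt-specific metadata: answer options keyed by the NLI label.
CHOICES = ["yes", "maybe", "no"]  # entailment (0), neutral (1), contradiction (2)

def apply_prompt(example):
    """Map an NLI example's fields into input and target sequences."""
    input_text = (f"If {example['premise']} is true, "
                  f"is it also true that {example['hypothesis']}?")
    target_text = CHOICES[example["label"]]
    return {"input": input_text, "target": target_text}

print(apply_prompt({"premise": "A dog is running",
                    "hypothesis": "An animal is moving",
                    "label": 0}))
# -> {'input': 'If A dog is running is true, is it also true that
#     An animal is moving?', 'target': 'yes'}
```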
bigscience-catalogue-data-dev/lm_code_github-eval_subset
false
[]
null
265
0
bigscience-historical-texts/HIPE2020_sent-split
false
[]
TODO
529
0
bigscience-historical-texts/hipe2020
false
[ "language:de", "language:en", "language:fr" ]
TODO
529
2
bingzhen/test2
false
[]
null
134
0
birgermoell/sv_corpora_parliament_processed
false
[]
null
265
0
bitmorse/kickstarter_2022-2021
false
[]
null
265
1
biu-nlp/qa_align
false
[]
This dataset contains QA-Alignments - annotations of cross-text content overlap. The task input is two sentences from two documents, roughly talking about the same event, along with their QA-SRL annotations, which capture verbal predicate-argument relations in question-answer format. The output is a cross-sentence alignment between sets of QAs which denote the same information. See the paper for details: QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions, Brook Weiss et al., EMNLP 2021. Here we provide both the QASRL annotations and the QA-Align annotations for the target sentences.
265
0
biu-nlp/qa_discourse
false
[]
The dataset contains question-answer pairs to model discourse relations. While answers roughly correspond to spans of the sentence, these spans could have been freely adjusted by annotators to grammatically fit the question; therefore, answers are given just as text and not as identified spans of the original sentence. See the paper for details: QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines, Pyatkin et al., 2020
265
0
biu-nlp/qa_srl2018
false
[]
The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. This dataset, a.k.a. "QASRL Bank", "QASRL-v2" or "QASRL-LS" (Large Scale), was constructed via crowdsourcing.
400
1
biu-nlp/qa_srl2020
false
[]
The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. This dataset, a.k.a. "QASRL-GS" (Gold Standard) or "QASRL-2020", was constructed via controlled crowdsourcing. See the paper for details: Controlled Crowdsourcing for High-Quality QA-SRL Annotation, Roit et al., 2020
558
0
biu-nlp/qamr
false
[]
Question-Answer Meaning Representations (QAMR) are a new paradigm for representing predicate-argument structure, which makes use of free-form questions and their answers in order to represent a wide range of semantic phenomena. The semantic expressivity of QAMR compares to (and in some cases exceeds) that of existing formalisms, while the representations can be annotated by non-experts (in particular, using crowdsourcing). Formal Notes:
* The `answer_ranges` feature here has a different meaning from that of the `qanom` and `qa_srl` datasets, although both are structured the same way; while in qasrl/qanom each "answer range" (i.e. each span, represented as [begin-idx, end-idx]) stands for an independent answer which is read separately (e.g., "John Vincen", "head of marketing"), in this `qamr` dataset each question has a single answer which might be composed of non-consecutive spans; that is, all given spans should be read successively (see the sketch after this entry).
* Another difference is that the meaning of `predicate` in QAMR is different and softer than in QASRL/QANom: here, the predicate is not necessarily within the question, it can also be in the answer; it is generally what the annotator marked as the focus of the QA.
264
0
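A minimal sketch of the distinction noted above, assuming inclusive [begin, end] token indices (the exact indexing convention is an assumption):

```python
def spans_to_text(tokens, answer_ranges):
    """Render one QAMR answer from possibly non-consecutive spans.

    Unlike qasrl/qanom, where each range is an independent answer,
    QAMR's ranges for a question are read successively as one answer.
    Assumes inclusive [begin, end] token indices.
    """
    return " ".join(" ".join(tokens[begin:end + 1])
                    for begin, end in answer_ranges)

tokens = "John , head of marketing , resigned yesterday".split()
# Two non-consecutive spans read together form a single answer.
print(spans_to_text(tokens, [[0, 0], [6, 7]]))  # -> "John resigned yesterday"
```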
biu-nlp/qanom
false
[]
The dataset contains question-answer pairs to model the predicate-argument structure of deverbal nominalizations. The questions start with wh-words (Who, What, Where, When, etc.) and contain the verbal form of a nominalization from the sentence; the answers are phrases in the sentence. See the paper for details: QANom: Question-Answer driven SRL for Nominalizations (Klein et al., COLING 2020). For previewing the QANom data along with the verbal annotations of QASRL, check out "https://browse.qasrl.org/". This dataset was annotated by selected workers from Amazon Mechanical Turk.
264
1
blinoff/medical_qa_ru_data
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ru", "license:unknown" ]
This dataset contains 190,335 Russian Q&A posts from a medical forum.
268
6
bobbydylan/top2k
false
[]
null
264
0
braincode/braincode
false
[]
null
136
0
brunodorneles/ner
false
[]
null
265
0
bryantpwhite/Medieval_Sermons_in_French
false
[]
null
134
0
bs-modeling-metadata/OSCAR_Entity_13_000
false
[]
null
265
0
bs-modeling-metadata/c4-en-html-with-metadata
false
[]
null
265
4
bs-modeling-metadata/c4_newslike_url_only
false
[]
null
265
0
bs-modeling-metadata/website_metadata_c4
false
[]
null
269
1
bs-modeling-metadata/wiki_dump
false
[]
null
134
0
bstad/github-issues
false
[]
null
265
1
bwu2018/anime-tagging-dataset
false
[]
null
265
3
caca/zscczs
false
[]
null
134
0
cahya/persona_empathetic
false
[ "license:mit" ]
null
136
0
cakiki/args_me
false
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:'en-US'", "license:cc-by-4.0" ]
The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal.
525
1
cakiki/arxiv-metadata
false
[ "license:cc0-1.0" ]
null
136
0
cakiki/en_wiki_quote
false
[ "license:cc-by-sa-3.0" ]
null
263
0
cakiki/paperswithcode
false
[]
null
802
0
caltonji/harrypotter_squad_v2
false
[]
null
265
0
caltonji/harrypotter_squad_v2_2
false
[]
null
269
0
calvpang/github-issues
false
[]
null
265
0
cameronbc/synthtiger
false
[]
A synthetic scene text OCR dataset derived from the [SynthTIGER](https://github.com/clovaai/synthtiger) generator.
133
0
cassandra-themis/QR-AN
false
[ "task_categories:summarization", "task_categories:text-classification", "task_categories:text-generation", "task_ids:multi-class-classification", "task_ids:topic-classification", "size_categories:10K<n<100K", "language:fr", "conditional-text-generation" ]
QR-AN Dataset: a classification dataset on French Parliament debates. This is a dataset for theme/topic classification, made of questions and answers from https://www2.assemblee-nationale.fr/recherche/resultats_questions. It contains 188 unbalanced classes and 80k question-answer pairs divided into 3 splits: train (60k), val (10k) and test (10k).
658
1
castorini/afriberta-corpus
false
[ "task_categories:text-generation", "task_ids:language-modeling", "language:om", "language:am", "language:rw", "language:rn", "language:ha", "language:ig", "language:pcm", "language:so", "language:sw", "language:ti", "language:yo", "language:multilingual", "license:apache-2.0" ]
Corpus used for training AfriBERTa models
1,506
5
castorini/mr-tydi-corpus
false
[ "task_categories:text-retrieval", "multilinguality:multilingual", "language:ar", "language:bn", "language:en", "language:fi", "language:id", "language:ja", "language:ko", "language:ru", "language:sw", "language:te", "language:th", "license:apache-2.0" ]
null
2,050
1