id (string, length 2–115) | private (bool, 1 class) | tags (sequence) | description (string, length 0–5.93k, nullable) | downloads (int64, 0–1.14M) | likes (int64, 0–1.79k) |
---|---|---|---|---|---|
castorini/mr-tydi | false | [
"task_categories:text-retrieval",
"multilinguality:multilingual",
"language:ar",
"language:bn",
"language:en",
"language:fi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th",
"license:apache-2.0"
] | null | 2,178 | 2 |
castorini/msmarco_v1_doc_doc2query-t5_expansions | false | [
"language:en",
"license:apache-2.0"
] | null | 267 | 0 |
castorini/msmarco_v1_doc_segmented_doc2query-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 263 | 0 |
castorini/msmarco_v1_passage_doc2query-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 268 | 0 |
castorini/msmarco_v2_doc_doc2query-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 263 | 0 |
castorini/msmarco_v2_doc_segmented_doc2query-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 263 | 0 |
castorini/msmarco_v2_passage_doc2query-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 265 | 0 |
castorini/nq_gar-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 261 | 1 |
castorini/triviaqa_gar-t5_expansions | false | [
"language:English",
"license:Apache License 2.0"
] | null | 262 | 0 |
caythuoc/caoduoclieu | false | [] | null | 133 | 0 |
cbrew475/hwu66 | false | [] | This project contains natural language data for human-robot interaction in a home domain, which
Xingkun Liu et al., from Heriot-Watt University, collected and annotated. It can be used for evaluating
NLU services/platforms. | 264 | 0 |
ccccccc/hdjw_94ejrjr | false | [] | null | 133 | 0 |
ccdv/arxiv-classification | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"size_categories:10K<n<100K",
"language:en",
"long context"
] | Arxiv Classification Dataset: a classification of Arxiv Papers (11 classes).
It contains 11 slightly unbalanced classes, 33k Arxiv Papers divided into 3 splits: train (23k), val (5k) and test (5k).
Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning" by Jun He, Liqun Wang, Liu Liu, Jiao Feng and Hao Wu
See: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8675939
See: https://github.com/LiqunW/Long-document-dataset | 611 | 5 |
ccdv/arxiv-summarization | false | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"conditional-text-generation"
] | Arxiv dataset for summarization.
From paper: "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents" by A. Cohan et al.
See: https://aclanthology.org/N18-2097.pdf
See: https://github.com/armancohan/long-summarization | 1,720 | 16 |
ccdv/cnn_dailymail | false | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation"
] | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | 4,998 | 3 |
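A minimal loading sketch for the row above, using the Hugging Face `datasets` library; the `"3.0.0"` configuration name is an assumption carried over from the canonical CNN/DailyMail dataset and may not apply to this mirror.

```python
# Minimal sketch: the "3.0.0" config name is assumed from the canonical
# CNN/DailyMail dataset and may differ for the ccdv mirror.
from datasets import load_dataset

cnn_dm = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="train")
sample = cnn_dm[0]
print(sample["article"][:300])   # document to be summarized
print(sample["highlights"])      # target summary (highlights joined with <s>...</s>)
```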
ccdv/govreport-summarization | false | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"conditional-text-generation",
"arxiv:2104.02112"
] | GovReport dataset for summarization.
From paper: "Efficient Attentions for Long Document Summarization" by L. Huang et al.
See: https://arxiv.org/pdf/2104.02112.pdf
See: https://github.com/luyang-huang96/LongDocSum | 433 | 4 |
ccdv/patent-classification | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"size_categories:10K<n<100K",
"language:en",
"long context"
] | Patent Classification Dataset: a classification of Patents (9 classes).
It contains 9 unbalanced classes, 35k Patents and summaries divided into 3 splits: train (25k), val (5k) and test (5k).
Data are sampled from "BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization." by Eva Sharma, Chen Li and Lu Wang
See: https://aclanthology.org/P19-1212.pdf
See: https://evasharma.github.io/bigpatent/ | 633 | 3 |
ccdv/pubmed-summarization | false | [
"task_categories:summarization",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"conditional-text-generation"
] | PubMed dataset for summarization.
From paper: "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents" by A. Cohan et al.
See: https://aclanthology.org/N18-2097.pdf
See: https://github.com/armancohan/long-summarization | 1,742 | 12 |
cdleong/piglatin-mt | false | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit"
] | Pig-latin and English parallel machine translation corpus.
Based on
The Project Gutenberg EBook of "De Bello Gallico" and Other Commentaries
https://www.gutenberg.org/ebooks/10657
Converted to pig-latin with https://github.com/bpabel/piglatin | 262 | 0 |
cdleong/temp_africaNLP_keyword_spotting_for_african_languages | false | [
"language:wo",
"language:fuc",
"language:srr",
"language:mnk",
"language:snk"
] | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | 132 | 0 |
cdminix/iwslt2011 | false | [] | Both manual transcripts and ASR outputs from the IWSLT2011 speech translation evaluation campaign are often used for the related punctuation annotation task. This dataset takes care of preprocessing said transcripts and automatically inserts the punctuation marks given in the manual transcripts into the ASR outputs using Levenshtein alignment. | 134 | 0 |
cdminix/mgb1 | false | [] | The first edition of the Multi-Genre Broadcast (MGB-1) Challenge is an evaluation of speech recognition, speaker diarization, and lightly supervised alignment using TV recordings in English.
The speech data is broad and multi-genre, spanning the whole range of TV output, and represents a challenging task for speech technology.
In 2015, the challenge used data from the British Broadcasting Corporation (BBC). | 264 | 0 |
cem/dnm | false | [] | null | 263 | 0 |
cem/film | false | [] | null | 133 | 0 |
cemigo/taylor_vs_shakes | false | [] | null | 133 | 0 |
cemigo/test-data | false | [] | null | 133 | 0 |
cestwc/adapted-msrcomp | false | [] | null | 264 | 0 |
cestwc/adapted-paranmt5m | false | [] | null | 265 | 2 |
cestwc/adapted-sentcomp | false | [] | null | 265 | 0 |
cestwc/adapted-synonym | false | [] | null | 263 | 0 |
cestwc/adapted-wikismall | false | [] | null | 264 | 0 |
cestwc/adapted-wordnet | false | [] | null | 268 | 1 |
cestwc/asrc | false | [] | null | 265 | 0 |
cestwc/cnn_dailymail-metaeval100 | false | [] | null | 265 | 0 |
cestwc/cnn_dailymail-snippets | false | [] | null | 267 | 0 |
cestwc/cnn_dailymail-test50 | false | [] | null | 271 | 0 |
cestwc/conjnli | false | [] | null | 267 | 0 |
cestwc/sac-approx-1 | false | [] | null | 265 | 0 |
cestwc/sac-na | false | [] | null | 267 | 0 |
cestwc/sac | false | [] | null | 266 | 0 |
cfilt/iitb-english-hindi | false | [] | null | 941 | 6 |
cgarciae/point-cloud-mnist | false | [] | The MNIST dataset consists of 70,000 28x28 black-and-white points in 10 classes (one for each digit), with 7,000
points per class. There are 60,000 training points and 10,000 test points. | 265 | 2 |
chau/ink_test01 | false | [
"license:other"
] | null | 266 | 0 |
chenghao/mc4_eu_dedup | false | [] | null | 266 | 0 |
chenghao/mc4_sw_dedup | false | [] | null | 264 | 0 |
chenghao/scielo_books | false | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:pt",
"language:es",
"license:cc-by-nc-sa-3.0"
] | null | 263 | 0 |
chenyuxuan/wikigold | false | [] | WikiGold dataset. | 334 | 0 |
cheulyop/dementiabank | false | [] | DementiaBank Pitt Corpus includes audios and transcripts of 99 controls and 194 dementia patients. These transcripts and audio files were gathered as part of a larger protocol administered by the Alzheimer and Related Dementias Study at the University of Pittsburgh School of Medicine. The original acquisition of the DementiaBank data was supported by NIH grants AG005133 and AG003705 to the University of Pittsburgh. Participants included elderly controls, people with probable and possible Alzheimer’s Disease, and people with other dementia diagnoses. Data were gathered longitudinally, on a yearly basis. | 265 | 0 |
cheulyop/ksponspeech | false | [] | KsponSpeech is a large-scale spontaneous speech corpus of Korean conversations. This corpus contains 969 hrs of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. KsponSpeech is publicly available on an open data hub site of the Korea government. (https://aihub.or.kr/aidata/105) | 290 | 2 |
chitra/contradiction | false | [] | null | 264 | 0 |
chitra/contradictionNLI | false | [] | null | 266 | 0 |
chmanoj/ai4bharat__samanantar_processed_te | false | [] | null | 264 | 0 |
chopey/dhivehi | false | [] | null | 264 | 0 |
clarin-pl/2021-punctuation-restoration | false | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:pl"
] | This dataset is designed to be used in training models
that restore punctuation marks from the output of
an Automatic Speech Recognition system for the Polish language. | 262 | 0 |
clarin-pl/aspectemo | false | [
"task_categories:token-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:mit"
] | AspectEmo dataset: Multi-Domain Corpus of Consumer Reviews for Aspect-Based
Sentiment Analysis | 368 | 1 |
clarin-pl/cst-wikinews | false | [] | CST Wikinews dataset. | 264 | 1 |
clarin-pl/kpwr-ner | false | [
"task_categories:other",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:18K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-3.0",
"structure-prediction"
] | KPWR-NER tagging dataset. | 856 | 4 |
clarin-pl/multiwiki_90k | false | [] | Multi-Wiki90k: Multilingual benchmark dataset for paragraph
segmentation | 264 | 1 |
clarin-pl/nkjp-pos | false | [
"task_categories:other",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:pl",
"license:gpl-3.0",
"structure-prediction"
] | NKJP-POS tagging dataset. | 279 | 1 |
clarin-pl/polemo2-official | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:8K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-4.0"
] | PolEmo 2.0: Corpus of Multi-Domain Consumer Reviews, evaluation data for article presented at CoNLL. | 2,624 | 3 |
classla/FRENK-hate-en | false | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045"
] | The FRENK Datasets of Socially Unacceptable Discourse in English. | 658 | 1 |
classla/FRENK-hate-hr | false | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:hr",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045"
] | The FRENK Datasets of Socially Unacceptable Discourse in Croatian. | 537 | 0 |
classla/FRENK-hate-sl | false | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sl",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045"
] | The FRENK Datasets of Socially Unacceptable Discourse in Slovene. | 520 | 0 |
classla/copa_hr | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:hr",
"license:cc-by-sa-4.0",
"causal-reasoning",
"textual-entailment",
"commonsense-reasoning",
"arxiv:2005.00333",
"arxiv:2104.09243"
] | The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation
of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the
XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises
(My body cast a shadow over the grass), each given a question (What is the cause?), and two choices
(The sun was rising; The grass was cut), with a label encoding which of the choices the annotator
or translator judged more plausible (The sun was rising).
The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the
following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean). | 275 | 0 |
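A minimal sketch of inspecting the COPA-HR features listed above with the Hugging Face `datasets` library, assuming the dataset loads with its default configuration.

```python
# Minimal sketch: assumes the default configuration exposes the features
# 'premise', 'choice1', 'choice2', 'label', 'question', 'changed'.
from datasets import load_dataset

copa_hr = load_dataset("classla/copa_hr")
print(copa_hr)  # expected splits: train (400), validation (100), test (500)

ex = copa_hr["train"][0]
# 'label' encodes which of choice1/choice2 is judged more plausible for the question
print(ex["premise"], "|", ex["question"])
print(ex["choice1"], "/", ex["choice2"], "->", ex["label"], "(changed:", ex["changed"], ")")
```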
classla/hr500k | false | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"language:hr",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization"
] | The hr500k training corpus contains about 500,000 tokens manually annotated on the levels of
tokenisation, sentence segmentation, morphosyntactic tagging, lemmatisation and named entities.
On the sentence level, the dataset contains 20159 training samples, 1963 validation samples and 2672 test samples
across the respective data splits. Each sample represents a sentence and includes the following features:
sentence ID ('sent_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'),
list of Multext-East tags ('xpos_tags'), list of UPOS tags ('upos_tags'),
list of morphological features ('feats'), and list of IOB tags ('iob_tags'). The 'upos_tags' and 'iob_tags' features
are encoded as class labels. | 524 | 0 |
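A minimal sketch of reading the sentence-level features described above, assuming the default configuration; decoding the class-label-encoded 'upos_tags' back to strings uses the feature's `int2str` mapping.

```python
# Minimal sketch: assumes a single default configuration with the listed features.
from datasets import load_dataset

hr500k = load_dataset("classla/hr500k", split="train")
sent = hr500k[0]
print(sent["sent_id"], sent["text"])
print(list(zip(sent["tokens"], sent["lemmas"])))

# 'upos_tags' and 'iob_tags' are encoded as class labels; map them back to names
upos = hr500k.features["upos_tags"].feature
print([upos.int2str(i) for i in sent["upos_tags"]])
```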
classla/janes_tag | false | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:part-of-speech",
"language:si",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization"
] | The dataset contains 6273 training samples, 762 validation samples and 749 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent_id'),
list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'),
list of Multext-East tags ('xpos_tags'), list of morphological features ('feats'),
and list of UPOS tags ('upos_tags'), which are encoded as class labels. | 262 | 0 |
classla/reldi_hr | false | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"language:hr",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization"
] | The dataset contains 6339 training samples, 815 validation samples and 785 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent_id'),
list of tokens ('tokens'), list of lemmas ('lemmas'), list of UPOS tags ('upos_tags'),
list of Multext-East tags ('xpos_tags'), list of morphological features ('feats'),
and list of IOB tags ('iob_tags'), which are encoded as class labels. | 262 | 0 |
classla/reldi_sr | false | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"language:sr",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization"
] | The dataset contains 5462 training samples, 711 validation samples and 725 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent_id'),
list of tokens ('tokens'), list of lemmas ('lemmas'), list of UPOS tags ('upos_tags'),
list of Multext-East tags ('xpos_tags'), list of morphological features ('feats'),
and list of IOB tags ('iob_tags'), which are encoded as class labels. | 262 | 0 |
classla/setimes_sr | false | [
"task_categories:other",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"language:sr",
"license:cc-by-sa-4.0",
"structure-prediction",
"normalization",
"tokenization"
] | SETimes_sr is a Serbian dataset annotated for morphosyntactic information and named entities.
The dataset contains 3177 training samples, 395 validation samples and 319 test samples
across the respective data splits. Each sample represents a sentence and includes the following features:
sentence ID ('sent_id'), sentence text ('text'), list of tokens ('tokens'), list of lemmas ('lemmas'),
list of Multext-East tags ('xpos_tags'), list of UPOS tags ('upos_tags'),
list of morphological features ('feats'), and list of IOB tags ('iob_tags'). The 'upos_tags' and 'iob_tags' features
are encoded as class labels. | 528 | 0 |
classla/ssj500k | false | [
"task_categories:token-classification",
"task_ids:lemmatization",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech",
"language:sl",
"license:cc-by-sa-4.0",
"structure-prediction",
"tokenization",
"dependency-parsing"
] | The dataset contains 7432 training samples, 1164 validation samples and 893 test samples.
Each sample represents a sentence and includes the following features: sentence ID ('sent_id'),
list of tokens ('tokens'), list of lemmas ('lemmas'),
list of Multext-East tags ('xpos_tags'), list of UPOS tags ('upos_tags'), list of morphological features ('feats'),
list of IOB tags ('iob_tags'), and list of universal dependency tags ('uds'). Three dataset configurations are
available, where the corresponding features are encoded as class labels: 'ner', 'upos', and 'ud'. | 528 | 0 |
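A minimal sketch of selecting one of the three configurations named above ('ner', 'upos', 'ud'); the decoding step assumes the chosen configuration encodes its tags as class labels, as the description states.

```python
# Minimal sketch: loads the 'ner' configuration named in the description.
from datasets import load_dataset

ssj_ner = load_dataset("classla/ssj500k", "ner", split="train")
sent = ssj_ner[0]
print(sent["tokens"])

# In the 'ner' configuration the IOB tags are class labels; decode them for readability
iob = ssj_ner.features["iob_tags"].feature
print([iob.int2str(i) for i in sent["iob_tags"]])
```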
clem/autonlp-data-french_word_detection | false | [] | null | 133 | 1 |
clips/mfaq | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:cs",
"language:da",
"language:de",
"language:en",
"language:es",
"language:fi",
"language:fr",
"language:he",
"language:hr",
"language:hu",
"language:id",
"language:it",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sv",
"language:tr",
"language:vi",
"license:cc0-1.0",
"arxiv:2109.12870"
] | We present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages. | 6,197 | 19 |
clips/mqa | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:ca",
"language:en",
"language:de",
"language:es",
"language:fr",
"language:ru",
"language:ja",
"language:it",
"language:zh",
"language:pt",
"language:nl",
"language:tr",
"language:pl",
"language:vi",
"language:ar",
"language:id",
"language:uk",
"language:ro",
"language:no",
"language:th",
"language:sv",
"language:el",
"language:fi",
"language:he",
"language:da",
"language:cs",
"language:ko",
"language:fa",
"language:hi",
"language:hu",
"language:sk",
"language:lt",
"language:et",
"language:hr",
"language:is",
"language:lv",
"language:ms",
"language:bg",
"language:sr",
"license:cc0-1.0"
] | MQA is a multilingual corpus of questions and answers parsed from the Common Crawl. Questions are divided between Frequently Asked Questions (FAQ) pages and Community Question Answering (CQA) pages. | 36,491 | 15 |
cloverhxy/DADER-source | false | [] | null | 264 | 0 |
cnrcastroli/aaaa | false | [] | null | 133 | 0 |
coala/kkk | false | [] | null | 133 | 0 |
coastalcph/fairlex | false | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"annotations_creators:machine-generated",
"language_creators:found",
"source_datasets:extended",
"language:en",
"language:de",
"language:fr",
"language:it",
"language:zh",
"license:cc-by-nc-sa-4.0",
"bias",
"gender-bias",
"arxiv:2103.13868",
"arxiv:2105.03887"
] | Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. | 758 | 2 |
codeceejay/ng_accent | false | [] | null | 133 | 0 |
cointegrated/ru-paraphrase-NMT-Leipzig | false | [
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"language:ru",
"license:cc-by-4.0",
"conditional-text-generation",
"paraphrase-generation",
"paraphrase"
] | null | 291 | 2 |
collectivat/tv3_parla | false | [
"task_categories:automatic-speech-recognition",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ca",
"license:cc-by-nc-4.0"
] | This corpus includes 240 hours of Catalan speech from broadcast material.
The details of segmentation, data processing and also model training are explained in Külebi, Öktem; 2018.
The content is owned by Corporació Catalana de Mitjans Audiovisuals, SA (CCMA);
we processed their material and are hereby making it available under their terms of use.
This project was supported by the Softcatalà Association. | 262 | 2 |
comodoro/pscr | false | [
"license:cc-by-nc-3.0"
] | null | 262 | 0 |
comodoro/vystadial2016_asr | false | [
"license:cc-by-nc-3.0"
] | This is the Czech data collected during the `VYSTADIAL` project. It is an extension of the 'Vystadial 2013' Czech part data release. The dataset comprises telephone conversations in Czech, developed for training acoustic models for automatic speech recognition in spoken dialogue systems. | 262 | 1 |
congpt/dstc23_asr | false | [] | null | 133 | 0 |
corypaik/coda | false | [
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2110.08182"
] | *The Color Dataset* (CoDa) is a probing dataset to evaluate the representation of visual properties in language models. CoDa consists of color distributions for 521 common objects, which are split into 3 groups: Single, Multi, and Any. | 266 | 2 |
corypaik/prost | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en-US",
"license:apache-2.0",
"arxiv:2106.03634"
] | *Physical Reasoning about Objects Through Space and Time* (PROST) is a probing dataset to evaluate the ability of pretrained LMs to understand and reason about the physical world. PROST consists of 18,736 cloze-style multiple choice questions from 14 manually curated templates, covering 10 physical reasoning concepts: direction, mass, height, circumference, stackable, rollable, graspable, breakable, slideable, and bounceable. | 463 | 1 |
craffel/openai_lambada | false | [] | LAMBADA dataset variant used by OpenAI to evaluate GPT-2 and GPT-3. | 1,126 | 1 |
crich/cider | false | [] | null | 133 | 0 |
cristinakuo/latino40 | false | [] | null | 133 | 0 |
crystina-z/inlang-mrtydi-corpus | false | [] | null | 1,053 | 0 |
crystina-z/inlang-mrtydi | false | [] | null | 1,056 | 0 |
crystina-z/mbert-mrtydi-corpus | false | [] | null | 1,576 | 0 |
crystina-z/mbert-mrtydi | false | [] | null | 1,584 | 0 |
crystina-z/msmarco-passage | false | [] | null | 264 | 1 |
csarron/25m-img-caps | false | [] | null | 133 | 1 |
csarron/4m-img-caps | false | [] | null | 133 | 1 |
csarron/image-captions | false | [] | null | 133 | 0 |
csebuetnlp/xlsum | false | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:my",
"language:zh",
"language:en",
"language:fr",
"language:gu",
"language:ha",
"language:hi",
"language:ig",
"language:id",
"language:ja",
"language:rn",
"language:ko",
"language:ky",
"language:mr",
"language:ne",
"language:om",
"language:ps",
"language:fa",
"language:pcm",
"language:pt",
"language:pa",
"language:ru",
"language:gd",
"language:sr",
"language:si",
"language:so",
"language:es",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:ti",
"language:tr",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:cy",
"language:yo",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"arxiv:1607.01759"
] | We present XL-Sum, a comprehensive and diverse dataset comprising 1.35 million professionally
annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics.
The dataset covers 45 languages ranging from low to high-resource, for many of which no
public dataset is currently available. XL-Sum is highly abstractive, concise,
and of high quality, as indicated by human and intrinsic evaluation. | 8,082 | 27 |
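A minimal sketch of loading one XL-Sum language with the Hugging Face `datasets` library; the `"english"` configuration name and the `text`/`summary` field names are assumptions and may differ on the actual dataset card.

```python
# Minimal sketch: the "english" config name and the text/summary field names
# are assumptions; check the dataset card for the exact names.
from datasets import load_dataset

xlsum_en = load_dataset("csebuetnlp/xlsum", "english", split="train")
pair = xlsum_en[0]
print(pair["text"][:300])   # article body
print(pair["summary"])      # professionally written summary
```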
csebuetnlp/xnli_bn | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:bn",
"license:cc-by-nc-sa-4.0",
"arxiv:2101.00204",
"arxiv:2007.01852"
] | This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of
MNLI data used in XNLI and a state-of-the-art English-to-Bengali translation model. | 303 | 0 |
csikasote/bemba_train_dev_sets_processed | false | [] | null | 266 | 0 |
csikasote/bemba_trainset_processed | false | [] | null | 265 | 0 |