| Column | Type | Length (min / max) |
|---|---|---|
| sha | string | 40 / 40 |
| text | string | 1 / 13.4M |
| id | string | 2 / 117 |
| tags | sequence | 1 / 7.91k |
| created_at | string | 25 / 25 |
| metadata | string | 2 / 875k |
| last_modified | string | 25 / 25 |
| arxiv | sequence | 0 / 25 |
| languages | sequence | 0 / 7.91k |
| tags_str | string | 17 / 159k |
| text_str | string | 1 / 447k |
| text_lists | sequence | 0 / 352 |
| processed_texts | sequence | 1 / 353 |
3ded52588975a96bbce202da4cdf605278e88274 |
This dataset was created by translating a part of the Stanford QA dataset.
It contains 5k QA pairs from the original SQuAD dataset translated to Hindi using the googletrans API. | aneesh-b/SQuAD_Hindi | [
"license:unknown",
"region:us"
] | 2022-10-14T18:20:33+00:00 | {"license": "unknown"} | 2022-10-16T05:18:33+00:00 | [] | [] | TAGS
#license-unknown #region-us
|
This dataset was created by translating a part of the Stanford QA dataset.
It contains 5k QA pairs from the original SQuAD dataset translated to Hindi using the googletrans API. | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
c2c253732cadc497dd41ab0029779f7735060e52 | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rick012/celeb-identities | [
"region:us"
] | 2022-10-14T18:32:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Cristiano_Ronaldo", "1": "Jay_Z", "2": "Nicki_Minaj", "3": "Peter_Obi", "4": "Roger_Federer", "5": "Serena_Williams"}}}}], "splits": [{"name": "train", "num_bytes": 195536.0, "num_examples": 18}], "download_size": 193243, "dataset_size": 195536.0}} | 2022-10-14T18:48:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "celeb-identities"
More Information needed | [
"# Dataset Card for \"celeb-identities\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"celeb-identities\"\n\nMore Information needed"
] |
e56902acc46a67a5f18623dd73a38d6685672a3f | # Dataset Card for "BRAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/BRAD | [
"region:us"
] | 2022-10-14T18:38:23+00:00 | {"dataset_info": {"features": [{"name": "review_id", "dtype": "string"}, {"name": "book_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 1, "1": 2, "2": 3, "3": 4, "4": 5}}}}], "splits": [{"name": "train", "num_bytes": 407433642, "num_examples": 510598}], "download_size": 211213150, "dataset_size": 407433642}} | 2022-10-14T18:38:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "BRAD"
More Information needed | [
"# Dataset Card for \"BRAD\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"BRAD\"\n\nMore Information needed"
] |
4b2ea7773f47fa46fef6408a38620fd08d19e055 |
# Dataset Card for OpenSLR Nepali Large ASR Cleaned
## Table of Contents
- [Dataset Card for OpenSLR Nepali Large ASR Cleaned](#dataset-card-for-openslr-nepali-large-asr-cleaned)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [How to use?](#how-to-use)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** [Original OpenSLR Large Nepali ASR Dataset link](https://www.openslr.org/54/)
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Sagar Sapkota](mailto:[email protected])
### Dataset Summary
This data set contains transcribed audio data for Nepali. The data set consists of flac files, and a TSV file. The file utt_spk_text.tsv contains a FileID, anonymized UserID and the transcription of audio in the file.
The data set has been manually quality-checked, but there might still be errors.
The audio files are sampled at a rate of 16 kHz, and leading and trailing silences are trimmed using torchaudio's voice activity detection.
For reference, the following function was applied to each of the original OpenSLR utterances.
```python
import torchaudio
SAMPLING_RATE = 16000
def process_audio_file(orig_path, new_path):
"""Read and process file in `orig_path` and save it to `new_path`"""
waveform, sampling_rate = torchaudio.load(orig_path)
if sampling_rate != SAMPLING_RATE:
waveform = torchaudio.functional.resample(waveform, sampling_rate, SAMPLING_RATE)
# trim end silences with Voice Activity Detection
waveform = torchaudio.functional.vad(waveform, sample_rate=SAMPLING_RATE)
torchaudio.save(new_path, waveform, sample_rate=SAMPLING_RATE)
```
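As a minimal sketch, the function above could be applied to a whole folder of FLAC files like this (the directory names here are hypothetical):
```python
from pathlib import Path

# Hypothetical source/destination folders for the original and cleaned FLAC files.
src_dir = Path("asr_nepali/data")
dst_dir = Path("cleaned/asr_nepali/data")

for orig_path in src_dir.rglob("*.flac"):
    new_path = dst_dir / orig_path.relative_to(src_dir)
    new_path.parent.mkdir(parents=True, exist_ok=True)
    process_audio_file(str(orig_path), str(new_path))
```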
### How to use?
There are two configurations for the data: one to download the original data and the other to download the preprocessed data as described above.
1. First, to download the original dataset with HuggingFace's [Dataset](https://huggingface.co/docs/datasets/) API:
```python
from datasets import load_dataset
dataset = load_dataset("spktsagar/openslr-nepali-asr-cleaned", name="original", split='train')
```
2. To download the preprocessed dataset:
```python
from datasets import load_dataset
dataset = load_dataset("spktsagar/openslr-nepali-asr-cleaned", name="cleaned", split='train')
```
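Once loaded, a single example can be inspected like this (a sketch using the field names documented under Data Fields below):
```python
sample = dataset[0]

print(sample["utterance_id"], sample["speaker_id"])
print(sample["transcription"], sample["num_frames"])

# The audio column decodes to a dict with the file path, waveform array and sampling rate.
audio = sample["utterance"]
print(audio["sampling_rate"], audio["array"].shape)
```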
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition.
### Languages
Nepali
## Dataset Structure
### Data Instances
```python
{
'utterance_id': 'e1c4d414df',
'speaker_id': '09da0',
'utterance': {
'path': '/root/.cache/huggingface/datasets/downloads/extracted/e3cf9a618900289ecfd4a65356633d7438317f71c500cbed122960ab908e1e8a/cleaned/asr_nepali/data/e1/e1c4d414df.flac',
'array': array([-0.00192261, -0.00204468, -0.00158691, ..., 0.00323486, 0.00256348, 0.00262451], dtype=float32),
'sampling_rate': 16000
},
'transcription': '२००५ मा बिते',
'num_frames': 42300
}
```
### Data Fields
- utterance_id: a string identifying the utterance
- speaker_id: obfuscated unique id of the speaker whose utterance is in the current instance
- utterance:
- path: path to the utterance .flac file
- array: numpy array of the utterance
- sampling_rate: sample rate of the utterance
- transcription: Nepali text spoken in the utterance
- num_frames: length of waveform array
### Data Splits
The dataset is not split. The consumer should split it as per their requirements. | spktsagar/openslr-nepali-asr-cleaned | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-14T18:44:31+00:00 | {"license": "cc-by-sa-4.0", "dataset_info": [{"config_name": "original", "features": [{"name": "utterance_id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "utterance", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "num_frames", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40925646, "num_examples": 157905}], "download_size": 9340083067, "dataset_size": 40925646}, {"config_name": "cleaned", "features": [{"name": "utterance_id", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "utterance", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "num_frames", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 40925646, "num_examples": 157905}], "download_size": 5978669282, "dataset_size": 40925646}]} | 2022-10-23T17:15:15+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-us
|
# Dataset Card for OpenSLR Nepali Large ASR Cleaned
## Table of Contents
- Dataset Card for OpenSLR Nepali Large ASR Cleaned
- Table of Contents
- Dataset Description
- Dataset Summary
- How to use?
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
## Dataset Description
- Homepage: Original OpenSLR Large Nepali ASR Dataset link
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: Sagar Sapkota
### Dataset Summary
This data set contains transcribed audio data for Nepali. The data set consists of flac files, and a TSV file. The file utt_spk_text.tsv contains a FileID, anonymized UserID and the transcription of audio in the file.
The data set has been manually quality-checked, but there might still be errors.
The audio files are sampled at a rate of 16 kHz, and leading and trailing silences are trimmed using torchaudio's voice activity detection.
For reference, the following function was applied to each of the original OpenSLR utterances.
### How to use?
There are two configurations for the data: one to download the original data and the other to download the preprocessed data as described above.
1. First, to download the original dataset with HuggingFace's Dataset API:
2. To download the preprocessed dataset:
### Supported Tasks and Leaderboards
- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition.
### Languages
Nepali
## Dataset Structure
### Data Instances
### Data Fields
- utterance_id: a string identifying the utterance
- speaker_id: obfuscated unique id of the speaker whose utterance is in the current instance
- utterance:
- path: path to the utterance .flac file
- array: numpy array of the utterance
- sampling_rate: sample rate of the utterance
- transcription: Nepali text spoken in the utterance
- num_frames: length of waveform array
### Data Splits
The dataset is not split. The consumer should split it as per their requirements. | [
"# Dataset Card for OpenSLR Nepali Large ASR Cleaned",
"## Table of Contents\n- Dataset Card for OpenSLR Nepali Large ASR Cleaned\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - How to use?\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits",
"## Dataset Description\n\n- Homepage: Original OpenSLR Large Nepali ASR Dataset link\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Sagar Sapkota",
"### Dataset Summary\n\nThis data set contains transcribed audio data for Nepali. The data set consists of flac files, and a TSV file. The file utt_spk_text.tsv contains a FileID, anonymized UserID and the transcription of audio in the file.\nThe data set has been manually quality-checked, but there might still be errors.\n\nThe audio files are sampled at a rate of 16KHz, and leading and trailing silences are trimmed using torchaudio's voice activity detection.\n\nFor your reference, following was the function applied on each of the original openslr utterances.",
"### How to use?\n\nThere are two configurations for the data: one to download the original data and the other to download the preprocessed data as described above.\n1. First, to download the original dataset with HuggingFace's Dataset API:\n\n\n2. To download the preprocessed dataset:",
"### Supported Tasks and Leaderboards\n\n- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition.",
"### Languages\n\nNepali",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- utterance_id: a string identifying the utterances\n- speaker_id: obfuscated unique id of the speaker whose utterances is in the current instance\n- utterance:\n - path: path to the utterance .flac file\n - array: numpy array of the utterance\n - sampling_rate: sample rate of the utterance\n- transcription: Nepali text which spoken in the utterance\n- num_frames: length of waveform array",
"### Data Splits\n\nThe dataset is not split. The consumer should split it as per their requirements."
] | [
"TAGS\n#license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for OpenSLR Nepali Large ASR Cleaned",
"## Table of Contents\n- Dataset Card for OpenSLR Nepali Large ASR Cleaned\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - How to use?\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits",
"## Dataset Description\n\n- Homepage: Original OpenSLR Large Nepali ASR Dataset link\n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Sagar Sapkota",
"### Dataset Summary\n\nThis data set contains transcribed audio data for Nepali. The data set consists of flac files, and a TSV file. The file utt_spk_text.tsv contains a FileID, anonymized UserID and the transcription of audio in the file.\nThe data set has been manually quality-checked, but there might still be errors.\n\nThe audio files are sampled at a rate of 16KHz, and leading and trailing silences are trimmed using torchaudio's voice activity detection.\n\nFor your reference, following was the function applied on each of the original openslr utterances.",
"### How to use?\n\nThere are two configurations for the data: one to download the original data and the other to download the preprocessed data as described above.\n1. First, to download the original dataset with HuggingFace's Dataset API:\n\n\n2. To download the preprocessed dataset:",
"### Supported Tasks and Leaderboards\n\n- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition.",
"### Languages\n\nNepali",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- utterance_id: a string identifying the utterances\n- speaker_id: obfuscated unique id of the speaker whose utterances is in the current instance\n- utterance:\n - path: path to the utterance .flac file\n - array: numpy array of the utterance\n - sampling_rate: sample rate of the utterance\n- transcription: Nepali text which spoken in the utterance\n- num_frames: length of waveform array",
"### Data Splits\n\nThe dataset is not split. The consumer should split it as per their requirements."
] |
da93d7ca5f81aaae854ade8bcaf8147a6d0a0cb5 | from datasets import load_dataset
dataset = load_dataset("Ariela/muneca-papel")
| Ariela/muneca-papel | [
"license:unknown",
"region:us"
] | 2022-10-14T18:44:36+00:00 | {"license": "unknown"} | 2022-10-15T18:56:12+00:00 | [] | [] | TAGS
#license-unknown #region-us
| from datasets import load_dataset
dataset = load_dataset("Ariela/muneca-papel")
| [] | [
"TAGS\n#license-unknown #region-us \n"
] |
c4a17a7a5dbacb594c23e8ff0aafca7250121013 | # Dataset Card for "OSACT4_hatespeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/OSACT4_hatespeech | [
"region:us"
] | 2022-10-14T18:48:30+00:00 | {"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "offensive", "dtype": "string"}, {"name": "hate", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1417732, "num_examples": 6838}, {"name": "validation", "num_bytes": 204725, "num_examples": 999}], "download_size": 802812, "dataset_size": 1622457}} | 2022-10-14T18:48:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "OSACT4_hatespeech"
More Information needed | [
"# Dataset Card for \"OSACT4_hatespeech\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"OSACT4_hatespeech\"\n\nMore Information needed"
] |
37c7175b2b6f07d4c749f7390ce9784e999aa1d5 | # Dataset Card for "Sentiment_Lexicons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sentiment_Lexicons | [
"region:us"
] | 2022-10-14T18:56:58+00:00 | {"dataset_info": {"features": [{"name": "Term", "dtype": "string"}, {"name": "bulkwalter", "dtype": "string"}, {"name": "sentiment_score", "dtype": "string"}, {"name": "positive_occurrence_count", "dtype": "string"}, {"name": "negative_occurrence_count", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2039703, "num_examples": 43308}], "download_size": 1068103, "dataset_size": 2039703}} | 2022-10-14T18:57:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Sentiment_Lexicons"
More Information needed | [
"# Dataset Card for \"Sentiment_Lexicons\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Sentiment_Lexicons\"\n\nMore Information needed"
] |
e43dbe88d29779bc0440e214fc4de451d22392bc | ## Textual Complexity Corpus for School Grades of the Brazilian Educational System
The corpus includes excerpts from: textbooks, whose complete list is presented below; news from the "Para Seu Filho Ler" (PSFL) section of the newspaper Zero Hora, which presents some of the same news as the Zero Hora corpus but written for children aged 8 to 11; SAEB exams; digital books from the Portuguese Wikilivros; and Enem exams from 2015, 2016 and 2017. All the material, in Portuguese, was made available to evaluate the textual complexity (readability) task.
Complete list of the textbooks and their original sources
This corpus is part of the resources of my PhD in Natural Language Processing, carried out at the Núcleo Interinstitucional de Linguística Computacional at USP, São Carlos. This work was supervised by Prof. Sandra Maria Aluísio.
http://nilc.icmc.usp.br
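A minimal loading sketch with the `datasets` library, assuming the Hub id `tiagoblima/nilc-school-books` and the train/validation/test splits with `text`/`level` features listed in this record's metadata:
```python
from datasets import load_dataset

dataset = load_dataset("tiagoblima/nilc-school-books")

print(dataset)                        # train / validation / test splits
example = dataset["train"][0]
print(example["level"], example["text"][:200])
```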
@inproceedings{mgazzola19,
title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português},
author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio},
booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology},
year={2019}
} | tiagoblima/nilc-school-books | [
"license:mit",
"region:us"
] | 2022-10-14T20:09:32+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1276559.048483246, "num_examples": 8321}, {"name": "train", "num_bytes": 4595060.28364021, "num_examples": 29952}, {"name": "validation", "num_bytes": 510715.6678765444, "num_examples": 3329}], "download_size": 3645953, "dataset_size": 6382335.0}} | 2022-11-13T01:03:20+00:00 | [] | [] | TAGS
#license-mit #region-us
| ## Textual Complexity Corpus for School Grades of the Brazilian Educational System
The corpus includes excerpts from: textbooks, whose complete list is presented below; news from the "Para Seu Filho Ler" (PSFL) section of the newspaper Zero Hora, which presents some of the same news as the Zero Hora corpus but written for children aged 8 to 11; SAEB exams; digital books from the Portuguese Wikilivros; and Enem exams from 2015, 2016 and 2017. All the material, in Portuguese, was made available to evaluate the textual complexity (readability) task.
Complete list of the textbooks and their original sources
This corpus is part of the resources of my PhD in Natural Language Processing, carried out at the Núcleo Interinstitucional de Linguística Computacional at USP, São Carlos. This work was supervised by Prof. Sandra Maria Aluísio.
URL
@inproceedings{mgazzola19,
title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português},
author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio},
booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology},
year={2019}
} | [
"## Córpus de Complexidade Textual para Estágios Escolares do Sistema Educacional Brasileiro\n\nO córpus inclui trechos de: livros-textos cuja lista completa é apresentada abaixo, notícias da Seção Para Seu Filho Ler (PSFL) do jornal Zero Hora que apresenta algumas notícias sobre o mesmo córpus do jornal do Zero Hora, mas escritas para crianças de 8 a 11 anos de idade , Exames do SAEB , Livros Digitais do Wikilivros em Português, Exames do Enem dos anos 2015, 2016 e 2017. Todo o material em português foi disponibilizado para avaliar a tarefa de complexidade textual (readability).\n\nLista completa dos Livros Didáticos e suas fontes originais\n\nEsse corpus faz parte dos recursos de meu doutorado na área de Natural Language Processing, sendo realizado no Núcleo Interinstitucional de Linguística Computacional da USP de São Carlos. Esse trabalho foi orientado pela Profa. Sandra Maria Aluísio.\n\nURL\n\n@inproceedings{mgazzola19,\n title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português},\n author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio},\n booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology},\n year={2019}\n}"
] | [
"TAGS\n#license-mit #region-us \n",
"## Córpus de Complexidade Textual para Estágios Escolares do Sistema Educacional Brasileiro\n\nO córpus inclui trechos de: livros-textos cuja lista completa é apresentada abaixo, notícias da Seção Para Seu Filho Ler (PSFL) do jornal Zero Hora que apresenta algumas notícias sobre o mesmo córpus do jornal do Zero Hora, mas escritas para crianças de 8 a 11 anos de idade , Exames do SAEB , Livros Digitais do Wikilivros em Português, Exames do Enem dos anos 2015, 2016 e 2017. Todo o material em português foi disponibilizado para avaliar a tarefa de complexidade textual (readability).\n\nLista completa dos Livros Didáticos e suas fontes originais\n\nEsse corpus faz parte dos recursos de meu doutorado na área de Natural Language Processing, sendo realizado no Núcleo Interinstitucional de Linguística Computacional da USP de São Carlos. Esse trabalho foi orientado pela Profa. Sandra Maria Aluísio.\n\nURL\n\n@inproceedings{mgazzola19,\n title={Predição da Complexidade Textual de Recursos Educacionais Abertos em Português},\n author={Murilo Gazzola, Sidney Evaldo Leal, Sandra Maria Aluisio},\n booktitle={Proceedings of the Brazilian Symposium in Information and Human Language Technology},\n year={2019}\n}"
] |
c2f48f68766a519e06a81cbc405d36dd4762d785 | # Dataset Card for "Commonsense_Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Commonsense_Validation | [
"region:us"
] | 2022-10-14T20:52:13+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "first_sentence", "dtype": "string"}, {"name": "second_sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 0, "1": 1}}}}], "splits": [{"name": "train", "num_bytes": 1420233, "num_examples": 10000}, {"name": "validation", "num_bytes": 133986, "num_examples": 1000}], "download_size": 837486, "dataset_size": 1554219}} | 2022-10-14T20:52:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Commonsense_Validation"
More Information needed | [
"# Dataset Card for \"Commonsense_Validation\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Commonsense_Validation\"\n\nMore Information needed"
] |
fed92167f9ae45fac1207017212a0c5bc6da02cd | # Dataset Card for "arastance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/arastance | [
"region:us"
] | 2022-10-14T21:14:14+00:00 | {"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "claim_url", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "stance", "dtype": {"class_label": {"names": {"0": "Discuss", "1": "Disagree", "2": "Unrelated", "3": "Agree"}}}}, {"name": "article_title", "dtype": "string"}, {"name": "article_url", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5611165, "num_examples": 646}, {"name": "train", "num_bytes": 29682402, "num_examples": 2848}, {"name": "validation", "num_bytes": 7080226, "num_examples": 569}], "download_size": 18033579, "dataset_size": 42373793}} | 2022-10-14T21:14:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "arastance"
More Information needed | [
"# Dataset Card for \"arastance\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"arastance\"\n\nMore Information needed"
] |
f89f0029a9dd992ff5e43eadde0ac821406d9cbe | # Dataset Card for "TUNIZI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/TUNIZI | [
"region:us"
] | 2022-10-14T21:28:41+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 188084, "num_examples": 2997}], "download_size": 127565, "dataset_size": 188084}} | 2022-10-14T21:28:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "TUNIZI"
More Information needed | [
"# Dataset Card for \"TUNIZI\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"TUNIZI\"\n\nMore Information needed"
] |
d25e904472d19ac8cb639bff14cd59f31a90991b | # Dataset Card for "AQAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/AQAD | [
"region:us"
] | 2022-10-14T21:35:33+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 23343014, "num_examples": 17911}], "download_size": 3581662, "dataset_size": 23343014}} | 2022-10-14T21:35:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "AQAD"
More Information needed | [
"# Dataset Card for \"AQAD\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"AQAD\"\n\nMore Information needed"
] |
e9674e9345c66631d1cd1f89ca1f00d8ae119c4f | # Dataset Card for "MArSum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/MArSum | [
"region:us"
] | 2022-10-14T21:42:30+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3332778, "num_examples": 1981}], "download_size": 1743254, "dataset_size": 3332778}} | 2022-10-14T21:42:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "MArSum"
More Information needed | [
"# Dataset Card for \"MArSum\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"MArSum\"\n\nMore Information needed"
] |
d337fbd0337b6eda3282433826f037770ee94f69 | # Dataset Card for "arabicReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/arabicReviews-ds-mini | [
"region:us"
] | 2022-10-14T22:25:48+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "content_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11505614.4, "num_examples": 3600}, {"name": "validation", "num_bytes": 1278401.6, "num_examples": 400}], "download_size": 6325726, "dataset_size": 12784016.0}} | 2022-10-14T22:53:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "arabicReviews-ds-mini"
More Information needed | [
"# Dataset Card for \"arabicReviews-ds-mini\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"arabicReviews-ds-mini\"\n\nMore Information needed"
] |
8068419f931b965fce6f7ee08a2ad07d7397d039 |
# Dataset Card for Dicionário Português
It is a list of Portuguese words with their inflections
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-all-words")
remote_dataset
```
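The object returned by the snippet above can then be inspected as follows (a sketch; the `train` split name is an assumption):
```python
print(remote_dataset)                # available splits and features
print(remote_dataset["train"][0])    # first entry, assuming a "train" split
```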
| VanessaSchenkel/pt-all-words | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:pt",
"region:us"
] | 2022-10-14T23:52:20+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["other", "text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "sbwce", "pretty_name": "Dicion\u00e1rio em Portugu\u00eas", "tags": []} | 2022-10-15T00:59:29+00:00 | [] | [
"pt"
] | TAGS
#task_categories-other #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Portuguese #region-us
|
# Dataset Card for Dicionário Português
It is a list of Portuguese words with their inflections
How to use it:
| [
"# Dataset Card for Dicionário Português\nIt is a list of portuguese words with its inflections\nHow to use it:"
] | [
"TAGS\n#task_categories-other #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Portuguese #region-us \n",
"# Dataset Card for Dicionário Português\nIt is a list of portuguese words with its inflections\nHow to use it:"
] |
d5c7c07268056a1b294d5815bdf012f92c327c1d | # Dataset Card for "arab-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/arab-ds-mini | [
"region:us"
] | 2022-10-15T00:12:24+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 87011869.13722204, "num_examples": 27116}, {"name": "validation", "num_bytes": 9668342.001417983, "num_examples": 3013}], "download_size": 49392988, "dataset_size": 96680211.13864002}} | 2022-10-15T00:12:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "arab-ds-mini"
More Information needed | [
"# Dataset Card for \"arab-ds-mini\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"arab-ds-mini\"\n\nMore Information needed"
] |
2eefce06256e84521bdff3e3a0df0248bd28cb27 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jets](https://huggingface.co/Jets) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-ea058a-1765461442 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T05:28:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-15T05:31:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Jets for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jets for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jets for evaluating this model."
] |
2ccad53104e75b5ec10f8abc1ac16f4c5f7ea384 |
# Dataset Card for uneune_image1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
今まで私が描いたイラスト100枚のデータセットです。
512×512にトリミングしてあります。
さっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。
This is a data set of 100 illustrations I have drawn so far.
Cropped to 512x512.
I wanted a dataset that can be used for learning with stableDiffusion, so I made it. | une/uneune_image1 | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-15T07:41:22+00:00 | {"license": "cc-by-4.0"} | 2022-10-15T08:07:58+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Dataset Card for uneune_image1
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
今まで私が描いたイラスト100枚のデータセットです。
512×512にトリミングしてあります。
さっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。
This is a data set of 100 illustrations I have drawn so far.
Cropped to 512x512.
I wanted a dataset that can be used for learning with stableDiffusion, so I made it. | [
"# Dataset Card for uneune_image1",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\n今まで私が描いたイラスト100枚のデータセットです。\n512×512にトリミングしてあります。\nさっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。\n\nThis is a data set of 100 illustrations I have drawn so far.\nCropped to 512x512.\nI wanted a dataset that can be used for learning with stableDiffusion, so I made it."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Dataset Card for uneune_image1",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\n今まで私が描いたイラスト100枚のデータセットです。\n512×512にトリミングしてあります。\nさっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。\n\nThis is a data set of 100 illustrations I have drawn so far.\nCropped to 512x512.\nI wanted a dataset that can be used for learning with stableDiffusion, so I made it."
] |
3c01cebd3e2d75dbf0987f1bc4c2b424923d733d | language: ["Urdu"] | Harsit/xnli2.0_train_urdu | [
"region:us"
] | 2022-10-15T08:26:47+00:00 | {} | 2022-10-15T08:30:11+00:00 | [] | [] | TAGS
#region-us
| language: ["Urdu"] | [] | [
"TAGS\n#region-us \n"
] |
d563042b2a16501be4c7eeb7b71998db3a24adec | # Dataset Card for "turknews-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omerist/turknews-mini | [
"region:us"
] | 2022-10-15T11:38:03+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9064933.18105424, "num_examples": 3534}, {"name": "validation", "num_bytes": 1008069.8189457601, "num_examples": 393}], "download_size": 5732599, "dataset_size": 10073003.0}} | 2022-10-15T11:38:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "turknews-mini"
More Information needed | [
"# Dataset Card for \"turknews-mini\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"turknews-mini\"\n\nMore Information needed"
] |
c15baed0307c4fcc7b375258a182ea49ef2d4e8b | # Dataset Card for "balloon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nielsr/balloon | [
"region:us"
] | 2022-10-15T11:59:06+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 30808803.0, "num_examples": 61}, {"name": "validation", "num_bytes": 8076058.0, "num_examples": 13}], "download_size": 38814125, "dataset_size": 38884861.0}} | 2022-10-15T12:02:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "balloon"
More Information needed | [
"# Dataset Card for \"balloon\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"balloon\"\n\nMore Information needed"
] |
519c6f85f8dc6cbbf4878ebdb71dd39054c5357d | topia
Sport
topia
Documentaire
topia
Song Of Topia
topia | Sethyyann3572/glue-topia | [
"license:openrail",
"region:us"
] | 2022-10-15T12:31:25+00:00 | {"license": "openrail"} | 2022-10-15T12:32:42+00:00 | [] | [] | TAGS
#license-openrail #region-us
| topia
Sport
topia
Documentaire
topia
Song Of Topia
topia | [] | [
"TAGS\n#license-openrail #region-us \n"
] |
a0bd554a17af724da30bd7b22b77022d9cb67991 | # Dataset Card for "celebrity_in_movie_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | deman539/celebrity_in_movie_demo | [
"region:us"
] | 2022-10-15T12:33:39+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "output"}}}}], "splits": [{"name": "train", "num_bytes": 2237547.0, "num_examples": 5}], "download_size": 1373409, "dataset_size": 2237547.0}} | 2022-10-15T13:50:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "celebrity_in_movie_demo"
More Information needed | [
"# Dataset Card for \"celebrity_in_movie_demo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"celebrity_in_movie_demo\"\n\nMore Information needed"
] |
fcd42e249fed48dbd1d3b9b969528ef9298d3464 |
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by [Allison Parrish](https://www.decontextualize.com/). Please visit Allison's fantastic [accompanying GitHub repository](https://github.com/aparrish/gutenberg-poetry-corpus) for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding `gutenberg_id` (1191 unique values) from Project Gutenberg.
```python
Dataset({
features: ['line', 'gutenberg_id'],
num_rows: 3085117
})
```
A row of data looks like this:
```python
{'line': 'And retreated, baffled, beaten,', 'gutenberg_id': 19}
```
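A minimal loading sketch, assuming the split is named `train` (the Hub id and the `line`/`gutenberg_id` fields come from this card):
```python
from datasets import load_dataset

corpus = load_dataset("biglam/gutenberg-poetry-corpus", split="train")

# Keep only the lines from one Project Gutenberg book, e.g. id 19 as in the sample row above.
book_19 = corpus.filter(lambda row: row["gutenberg_id"] == 19)
print(book_19[0]["line"])
```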
| biglam/gutenberg-poetry-corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"poetry",
"stylistics",
"poems",
"gutenberg",
"region:us"
] | 2022-10-15T12:42:22+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Gutenberg Poetry Corpus", "tags": ["poetry", "stylistics", "poems", "gutenberg"]} | 2022-10-18T09:53:52+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #poetry #stylistics #poems #gutenberg #region-us
|
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by Allison Parrish. Please visit Allison's fantastic accompanying GitHub repository for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding 'gutenberg_id' (1191 unique values) from Project Gutenberg.
A row of data looks like this:
| [
"# Allison Parrish's Gutenberg Poetry Corpus\n\nThis corpus was originally published under the CC0 license by Allison Parrish. Please visit Allison's fantastic accompanying GitHub repository for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.\n\nThis dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding 'gutenberg_id' (1191 unique values) from project Gutenberg.\n\n\nA row of data looks like this:"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #poetry #stylistics #poems #gutenberg #region-us \n",
"# Allison Parrish's Gutenberg Poetry Corpus\n\nThis corpus was originally published under the CC0 license by Allison Parrish. Please visit Allison's fantastic accompanying GitHub repository for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.\n\nThis dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding 'gutenberg_id' (1191 unique values) from project Gutenberg.\n\n\nA row of data looks like this:"
] |
e078a9a8bb873844031a65f6a0cc198ddcc1c6a5 | ## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1200 images with binary annotations: with (0) or without (1) a bokeh effect, i.e. shallow or deep depth of field. It is forked from the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.
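A minimal loading sketch, assuming the Hub id `svnfs/depth-of-field` and the `image`/`label` features listed in this record's metadata:
```python
from datasets import load_dataset

dof = load_dataset("svnfs/depth-of-field", split="train")

example = dof[0]
print(dof.features)                          # image and label feature definitions
print(example["label"], example["image"])    # 0 = with bokeh, 1 = without, per the summary above
```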
## Dataset Description
- **Repository:** [https://github.com/sniafas/photography-style-analysis](https://github.com/sniafas/photography-style-analysis)
- **Paper:** [Photography Style Analysis using Machine Learning](https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning)
### Citation Information
```
@article{sniafas2021,
title={DoF: An image dataset for depth of field classification},
author={Niafas, Stavros},
doi= {10.13140/RG.2.2.29880.62722},
url= {https://www.researchgate.net/publication/364356051_DoF_depth_of_field_datase},
year={2021}
}
```
Note that each DoF dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. | svnfs/depth-of-field | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"annotations_creators:Stavros Niafas",
"license:apache-2.0",
"region:us"
] | 2022-10-15T12:57:29+00:00 | {"annotations_creators": ["Stavros Niafas"], "license": "apache-2.0", "task_categories": ["image-classification", "image-segmentation"], "sample_number": [1200], "class_number": [2], "image_size": ["(200,300,3)"], "source_dataset": ["unsplash"], "dataset_info": [{"config_name": "depth-of-field", "features": [{"name": "image", "dtype": "string"}, {"name": "class", "dtype": {"class_label": {"names": {"0": "bokeh", "1": "no-bokeh"}}}}]}, {"config_name": "default", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 192150, "num_examples": 1200}], "download_size": 38792692, "dataset_size": 192150}]} | 2022-11-13T23:33:39+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-image-segmentation #annotations_creators-Stavros Niafas #license-apache-2.0 #region-us
| ## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1200 images with binary annotations: with (0) or without (1) a bokeh effect, i.e. shallow or deep depth of field. It is forked from the Unsplash 25K dataset.
## Dataset Description
- Repository: URL
- Paper:
Note that each DoF dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. | [
"## Dataset Summary\n\nDepth-of-Field(DoF) dataset is comprised of 1200 annotated images, binary annotated with(0) and without(1) bokeh effect, shallow or deep depth of field. It is a forked data set from the Unsplash 25K data set.",
"## Dataset Description\n\n- Repository: URL\n- Paper: \n\n\n\nNote that each DoF dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset."
] | [
"TAGS\n#task_categories-image-classification #task_categories-image-segmentation #annotations_creators-Stavros Niafas #license-apache-2.0 #region-us \n",
"## Dataset Summary\n\nDepth-of-Field(DoF) dataset is comprised of 1200 annotated images, binary annotated with(0) and without(1) bokeh effect, shallow or deep depth of field. It is a forked data set from the Unsplash 25K data set.",
"## Dataset Description\n\n- Repository: URL\n- Paper: \n\n\n\nNote that each DoF dataset has its own citation. Please see the source to\nget the correct citation for each contained dataset."
] |
75321e3f022839c10b67ba9c08bb6efac8e17aca | # Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ghoumrassi/clothes_sample | [
"region:us"
] | 2022-10-15T14:50:15+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20078406.0, "num_examples": 990}], "download_size": 0, "dataset_size": 20078406.0}} | 2022-10-15T17:07:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "clothes_sample"
More Information needed | [
"# Dataset Card for \"clothes_sample\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"clothes_sample\"\n\nMore Information needed"
] |
540de892a1be8640934c938b4177e1de14ca3559 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2-xl
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@rololbot](https://huggingface.co/rololbot) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-4df82b-1769161494 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T15:00:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "gpt2-xl", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-15T15:03:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2-xl
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @rololbot for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: gpt2-xl\n* Dataset: inverse-scaling/NeQA\n* Config: inverse-scaling--NeQA\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @rololbot for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: gpt2-xl\n* Dataset: inverse-scaling/NeQA\n* Config: inverse-scaling--NeQA\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @rololbot for evaluating this model."
] |
efce2cf816cf1abad0c590e9e737e5289e1f9394 | # Dataset Card for "Iraqi_Dialect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Iraqi_Dialect | [
"region:us"
] | 2022-10-15T20:16:56+00:00 | {"dataset_info": {"features": [{"name": "No.", "dtype": "string"}, {"name": " Tex", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "IDK", "2": "N", "3": "True"}}}}], "splits": [{"name": "train", "num_bytes": 365478, "num_examples": 1672}], "download_size": 134999, "dataset_size": 365478}} | 2022-10-15T20:17:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Iraqi_Dialect"
More Information needed | [
"# Dataset Card for \"Iraqi_Dialect\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Iraqi_Dialect\"\n\nMore Information needed"
] |
ee7fc57264b8056f8341f8215e5307a680a78f0a | # Dataset Card for "Sudanese_Dialect_Tweet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sudanese_Dialect_Tweet | [
"region:us"
] | 2022-10-15T20:39:50+00:00 | {"dataset_info": {"features": [{"name": "Tweet", "dtype": "string"}, {"name": "Annotator 1", "dtype": "string"}, {"name": "Annotator 2", "dtype": "string"}, {"name": "Annotator 3", "dtype": "string"}, {"name": "Mode", "dtype": "string"}, {"name": "Date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 345088, "num_examples": 2123}], "download_size": 141675, "dataset_size": 345088}} | 2022-10-15T20:40:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Sudanese_Dialect_Tweet"
More Information needed | [
"# Dataset Card for \"Sudanese_Dialect_Tweet\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Sudanese_Dialect_Tweet\"\n\nMore Information needed"
] |
8e2e32d0832c597e4ba2b1f252e59cec765a8c37 | # Dataset Card for "Sudanese_Dialect_Tweet_Tele"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Sudanese_Dialect_Tweet_Tele | [
"region:us"
] | 2022-10-15T20:47:08+00:00 | {"dataset_info": {"features": [{"name": "Tweet ID", "dtype": "string"}, {"name": "Tweet Text", "dtype": "string"}, {"name": "Date", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NEGATIVE", "1": "POSITIVE", "2": "OBJECTIVE"}}}}], "splits": [{"name": "train", "num_bytes": 872272, "num_examples": 5346}], "download_size": 353611, "dataset_size": 872272}} | 2022-10-15T20:47:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Sudanese_Dialect_Tweet_Tele"
More Information needed | [
"# Dataset Card for \"Sudanese_Dialect_Tweet_Tele\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Sudanese_Dialect_Tweet_Tele\"\n\nMore Information needed"
] |
1bf5e6c1c2761f004eb867b20ad5d8a173ace8da | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961515 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T20:52:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-15T20:53:08+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
8b2593845c16fa3deed61cb75900f4d472fc90f5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961516 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-15T20:52:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-15T20:53:37+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
cd13b81d7a5f2a2097052eee7be3652d71c7e698 | # Dataset Card for "cheques_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shivi/cheques_sample_data | [
"region:us"
] | 2022-10-15T21:25:47+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 7518544.0, "num_examples": 400}, {"name": "train", "num_bytes": 56481039.4, "num_examples": 2800}, {"name": "validation", "num_bytes": 15034990.0, "num_examples": 800}], "download_size": 58863727, "dataset_size": 79034573.4}} | 2022-11-05T21:31:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cheques_sample_data"
More Information needed | [
"# Dataset Card for \"cheques_sample_data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cheques_sample_data\"\n\nMore Information needed"
] |
c14be6279b7e817d010409aaad46df114f0af3f5 | # Dataset Card for "Satirical_Fake_News"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Satirical_Fake_News | [
"region:us"
] | 2022-10-15T21:37:45+00:00 | {"dataset_info": {"features": [{"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6131349, "num_examples": 3221}], "download_size": 3223892, "dataset_size": 6131349}} | 2022-10-15T21:37:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Satirical_Fake_News"
More Information needed | [
"# Dataset Card for \"Satirical_Fake_News\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Satirical_Fake_News\"\n\nMore Information needed"
] |
4be22018d039ee657dbeb7ff2e62fc9ae8eefdb6 | # Dataset Card for "NArabizi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/NArabizi | [
"region:us"
] | 2022-10-15T21:47:54+00:00 | {"dataset_info": {"features": [{"name": "ID", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NEU", "1": "NEG", "2": "MIX", "3": "POS"}}}}], "splits": [{"name": "test", "num_bytes": 4034, "num_examples": 144}, {"name": "train", "num_bytes": 27839, "num_examples": 998}, {"name": "validation", "num_bytes": 3823, "num_examples": 137}], "download_size": 12217, "dataset_size": 35696}} | 2022-10-15T21:48:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "NArabizi"
More Information needed | [
"# Dataset Card for \"NArabizi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"NArabizi\"\n\nMore Information needed"
] |
619c18ba46019c28099c82a430e773e98471b5db | # Dataset Card for "ArSAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/ArSAS | [
"region:us"
] | 2022-10-15T21:51:23+00:00 | {"dataset_info": {"features": [{"name": "#Tweet_ID", "dtype": "string"}, {"name": "Tweet_text", "dtype": "string"}, {"name": "Topic", "dtype": "string"}, {"name": "Sentiment_label_confidence", "dtype": "string"}, {"name": "Speech_act_label", "dtype": "string"}, {"name": "Speech_act_label_confidence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Negative", "1": "Neutral", "2": "Positive", "3": "Mixed"}}}}], "splits": [{"name": "train", "num_bytes": 6147723, "num_examples": 19897}], "download_size": 2998319, "dataset_size": 6147723}} | 2022-10-15T21:51:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ArSAS"
More Information needed | [
"# Dataset Card for \"ArSAS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ArSAS\"\n\nMore Information needed"
] |
0281194d215c73170d30add87e5f16f9dec1d641 |
# Dataset Card for OLM September/October 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the September/October 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. | olm/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
] | 2022-10-16T02:32:35+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM September/October 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2022-11-04T17:14:25+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us
|
# Dataset Card for OLM September/October 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the September/October 2022 Common Crawl snapshot.
Note: 'last_modified_timestamp' was parsed from whatever a website returned in its 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'. | [
"# Dataset Card for OLM September/October 2022 Common Crawl\n\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the September/October 2022 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'."
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us \n",
"# Dataset Card for OLM September/October 2022 Common Crawl\n\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the September/October 2022 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'."
] |
d370089b399492cc158548e9589fc3af76f4712a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-39d19a-1775961623 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T10:36:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T10:37:33+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
2bf5a7e1402a6f32c2073a75c61d75f4c9cca2e7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@CG80499](https://huggingface.co/CG80499) for evaluating this model. | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-e4c053-1775661622 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T10:44:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["inverse-scaling/NeQA"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "inverse-scaling/NeQA", "dataset_config": "inverse-scaling--NeQA", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-16T10:47:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @CG80499 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: inverse-scaling/NeQA\n* Config: inverse-scaling--NeQA\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @CG80499 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: inverse-scaling/NeQA\n* Config: inverse-scaling--NeQA\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @CG80499 for evaluating this model."
] |
1261b0ab43c1f488e329bb4b8e0fae03ece768c4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161639 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:07:26+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:08:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
ac7d6e4063103d3c15fbef1983c89e4760be6f4f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161640 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:07:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:08:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
7fac43a456593157221805407acd8171014c9259 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161641 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:07:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-large-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:08:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-large-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-large-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
8a6f9b98bdf89c8fef01ee76b1fab91d5ce74981 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161642 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:07:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:08:35+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
799c1d1c06895d834d846f0c09bbff283499a0ca | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161643 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:07:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:09:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
1f70cbec44ea6b75058fdad68ff55b8de9d4a522 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861660 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:48:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-large-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:52:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-large-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-large-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
230cdaa657e88026dab1f182c34af5653d8a55ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861659 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:48:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener_br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:51:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/bertimbau-base-lener_br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
7f85845a7030e1397ee63b931b90de06a6ee7847 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861661 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:48:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:51:38+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
4ac218a6129db895ec2ed0e960154742245b0d61 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861662 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T11:48:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T11:52:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
2ad1b139cd7f1240d4046d69387149f0d2f52938 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961678 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:18:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:19:26+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
cffeb21da0785a570afcf98be56916319f867852 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961679 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:18:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:19:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
530cf9d0b4b5d10007e8722680b6175b5d11d4bb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061680 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:18:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:19:34+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
0679fb25b4bb759691c22388d45706c8f85ba4b2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061681 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:18:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:20:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
449dcb775af777bad2fe5cb43070e97c76f65e05 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161682 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:19:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-base-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:21:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
13f46098ec2521c788887fd931319674601c0f47 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161683 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-16T12:19:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "pierreguillou/ner-bert-large-cased-pt-lenerbr", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-16T12:22:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
84e4f21a1e84ae47897a32f8177d4d096c2630f1 | # Dataset Card for "punctuation-nilc-t5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/punctuation-nilc-t5 | [
"region:us"
] | 2022-10-16T16:02:13+00:00 | {"dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "text_input", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1209863.2760485518, "num_examples": 2604}, {"name": "train", "num_bytes": 4340741.560763889, "num_examples": 9371}, {"name": "validation", "num_bytes": 491897.36016821867, "num_examples": 1041}], "download_size": 3084741, "dataset_size": 6042502.196980659}} | 2022-11-13T18:07:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "punctuation-nilc-t5"
More Information needed | [
"# Dataset Card for \"punctuation-nilc-t5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"punctuation-nilc-t5\"\n\nMore Information needed"
] |
0f9daa96611fe978caa9d14ca1b9d07f99380ccc |
## ESC benchmark diagnostic dataset
## Dataset Summary
As a part of ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.
All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
esc_diagnostic = load_dataset("esc-benchmark/esc-diagnostic-dataset")
```
The datasets are provided as splits, so to access the clean diagnostic subset of AMI:
```python
ami_diagnostic_clean = esc_diagnostic["ami.clean"]
```
Splits are: `"ami.clean"`, `"ami.other"`, `"earnings22.clean"`, `"earnings22.other"`, `"tedlium.clean"`, `"tedlium.other"`, `"voxpopuli.clean"`, `"voxpopuli.other"`, `"spgispeech.clean"`, `"spgispeech.other"`, `"gigaspeech.clean"`, `"gigaspeech.other"`, `"librispeech.clean"`, `"librispeech.other"`, `"common_voice.clean"`, `"common_voice.other"`.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
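As a rough illustration, the sketch below scores an off-the-shelf ASR model on one diagnostic split with word error rate. The Whisper checkpoint, the `evaluate` WER metric and the simple lowercasing of predictions are illustrative assumptions only, not part of the benchmark:

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

# Load a single diagnostic split and an example ASR model (assumed checkpoint).
diagnostic = load_dataset("esc-benchmark/esc-diagnostic-dataset", split="ami.clean")
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
wer_metric = evaluate.load("wer")

predictions, references = [], []
for sample in diagnostic:
    # The pipeline accepts the decoded audio dict (array + sampling rate) directly.
    predictions.append(asr(sample["audio"])["text"].lower())
    references.append(sample["norm_transcript"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
```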
## Dataset Information
A data point can be accessed by indexing a split of the dataset object loaded through `load_dataset`:
```python
print(esc_diagnostic["ami.clean"][0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information on the dataset from which the sample derives and a unique identifier name:
```python
{
'audio': {'path': None,
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical',
'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical',
'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005'
}
```
### Data Fields
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `ortho_transcript`: the orthographic transcription of the audio file.
- `norm_transcript`: the normalized transcription of the audio file.
- `id`: unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
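As a concrete illustration of that indexing note, using the AMI clean split loaded above:

```python
# Preferred: decodes only the single requested audio file.
sample = ami_diagnostic_clean[0]
audio_array = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]  # equals ami_diagnostic_clean.features["audio"].sampling_rate

# Avoid this pattern: it decodes every audio file in the split before returning the first one.
# first_audio = ami_diagnostic_clean["audio"][0]
```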
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
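A minimal sketch of producing such test-set predictions is shown below. The model checkpoint, the assumption that test samples carry an `id` field like the diagnostic samples, and the tab-separated output layout are illustrative only; the exact submission format is defined by the scoring space linked above:

```python
from datasets import load_dataset
from transformers import pipeline

# Assumed model checkpoint, for illustration only; config/split names follow the card below.
test_set = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="test.clean")
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")

with open("librispeech_test_clean_predictions.tsv", "w") as f:
    for sample in test_set:
        text = asr(sample["audio"])["text"]
        f.write(f"{sample['id']}\t{text}\n")  # assumed "<id>\t<prediction>" layout
```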
### Access
All eight of the datasets in ESC are accessible, and their licensing terms are freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esc-bench/esc-diagnostic-dataset](https://huggingface.co/datasets/esc-bench/esc-diagnostic-datasets).
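As an illustration of that workflow, the sketch below scores an off-the-shelf ASR pipeline on the clean AMI diagnostic subset against the normalised transcriptions. It is only a sketch: the Whisper checkpoint is a placeholder and the lower-casing is a crude normalisation, not the benchmark's official scoring.
```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Placeholder checkpoint: any Hugging Face ASR model could be used here.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
wer_metric = evaluate.load("wer")  # requires the jiwer package

ami_clean = load_dataset("esc-benchmark/esc-diagnostic-dataset", split="ami.clean")

predictions, references = [], []
for sample in ami_clean.select(range(16)):  # a handful of samples for a quick check
    pred = asr(sample["audio"]["array"])["text"]
    predictions.append(pred.lower())  # crude normalisation to match norm_transcript
    references.append(sample["norm_transcript"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
```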
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities and accents, recorded under different conditions. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esc-benchmark/esc-datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esc-benchmark/esc-datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esc-benchmark/esc-datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esc-benchmark/esc-datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` | esc-bench/esc-diagnostic-backup | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:en",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"license:other",
"asr",
"benchmark",
"speech",
"esc",
"region:us"
] | 2022-10-16T16:31:24+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ESC Diagnostic Dataset", "tags": ["asr", "benchmark", "speech", "esc"], "extra_gated_prompt": "Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}} | 2022-10-17T14:05:05+00:00 | [] | [
"en"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esc #region-us
|
## ESC benchmark diagnostic dataset
## Dataset Summary
As a part of ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.
All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
Datasets provided as splits, so to have clean diagnostic subset of AMI:
Splits are: '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"URL"', '"common_voice.clean"', '"common_voice.other"'.
The datasets are full prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through 'load_dataset':
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
### Data Fields
- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- 'ortho_transcript': the orthographic transcription of the audio file.
-
- 'norm_transcript': the normalized transcription of the audio file.
- 'id': unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to a Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.
Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, i.e. 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to URL for scoring.
### Access
All eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: URL
* GigaSpeech: URL
* SPGISpeech: URL
### Diagnostic Dataset
ESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esc-bench/esc-diagnostic-dataset.
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.
Example Usage:
Train/validation splits:
- 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')
- 'URL'
- 'URL'
Test splits:
- 'URL'
- 'URL'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
- 'clean.100': 100 hours of training data from the 'clean' subset
- 'clean.360': 360 hours of training data from the 'clean' subset
- 'other.500': 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset of contains approximately 1,400 hours of audio data from speakers of various nationalities, accents and different recording conditions. It is licensed under CC0-1.0.
Example usage:
Training/validation splits:
- 'train'
- 'validation'
Test splits:
- 'test'
## VoxPopuli
VoxPopuli s a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
Training/validation splits:
- 'train'
- 'validation'
Test splits:
- 'test'
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
Training/validation splits:
- 'train'
- 'validation'
Test splits:
- 'test'
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
Training/validation splits:
- 'train' ('l' subset of training data (2,500 h))
- 'validation'
Test splits:
- 'test'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
- 'xs': extra-small subset of training data (10 h)
- 's': small subset of training data (250 h)
- 'm': medium subset of training data (1,000 h)
- 'xl': extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
Training/validation splits:
- 'train' ('l' subset of training data (~5,000 h))
- 'validation'
Test splits:
- 'test'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
- 's': small subset of training data (~200 h)
- 'm': medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
Training/validation splits:
- 'train'
- 'validation'
Test splits:
- 'test'
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
Training/validation splits:
- 'train'
- 'validation'
Test splits:
- 'test' | [
"## ESC benchmark diagnostic dataset",
"## Dataset Summary\n\nAs a part of ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.\n\nAll eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:\n\n\n\nDatasets provided as splits, so to have clean diagnostic subset of AMI:\n\n\n\nSplits are: '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"common_voice.clean\"', '\"common_voice.other\"'. \n\nThe datasets are full prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.",
"## Dataset Information\n\nA data point can be accessed by indexing the dataset object loaded through 'load_dataset':\n\n\n\nA typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:",
"### Data Fields\n\n- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n\n- 'ortho_transcript': the orthographic transcription of the audio file.\n- \n- 'norm_transcript': the normalized transcription of the audio file.\n\n- 'id': unique id of the data sample.",
"### Data Preparation",
"#### Audio\nThe audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to a Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.\n\nNote that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"#### Transcriptions\nThe transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.\n\nTranscriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to URL for scoring.",
"### Access\nAll eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Diagnostic Dataset\nESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esc-bench/esc-diagnostic-dataset.",
"## LibriSpeech\n\nThe LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.\n\nExample Usage:\n\n\n\nTrain/validation splits:\n- 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')\n- 'URL'\n- 'URL'\n\nTest splits:\n- 'URL'\n- 'URL'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n- 'clean.100': 100 hours of training data from the 'clean' subset\n- 'clean.360': 360 hours of training data from the 'clean' subset\n- 'other.500': 500 hours of training data from the 'other' subset",
"## Common Voice\nCommon Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset of contains approximately 1,400 hours of audio data from speakers of various nationalities, accents and different recording conditions. It is licensed under CC0-1.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## VoxPopuli\nVoxPopuli s a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## TED-LIUM\nTED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## GigaSpeech\nGigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train' ('l' subset of training data (2,500 h))\n- 'validation'\n\nTest splits:\n- 'test'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n- 'xs': extra-small subset of training data (10 h)\n- 's': small subset of training data (250 h)\n- 'm': medium subset of training data (1,000 h)\n- 'xl': extra-large subset of training data (10,000 h)",
"## SPGISpeech\nSPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.\n\nLoading the dataset requires authorization.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train' ('l' subset of training data (~5,000 h))\n- 'validation'\n\nTest splits:\n- 'test'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n- 's': small subset of training data (~200 h)\n- 'm': medium subset of training data (~1,000 h)",
"## Earnings-22\nEarnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0. \n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## AMI\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esc #region-us \n",
"## ESC benchmark diagnostic dataset",
"## Dataset Summary\n\nAs a part of ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.\n\nAll eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:\n\n\n\nDatasets provided as splits, so to have clean diagnostic subset of AMI:\n\n\n\nSplits are: '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"URL\"', '\"common_voice.clean\"', '\"common_voice.other\"'. \n\nThe datasets are full prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.",
"## Dataset Information\n\nA data point can be accessed by indexing the dataset object loaded through 'load_dataset':\n\n\n\nA typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:",
"### Data Fields\n\n- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n\n- 'ortho_transcript': the orthographic transcription of the audio file.\n- \n- 'norm_transcript': the normalized transcription of the audio file.\n\n- 'id': unique id of the data sample.",
"### Data Preparation",
"#### Audio\nThe audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to a Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.\n\nNote that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"#### Transcriptions\nThe transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.\n\nTranscriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to URL for scoring.",
"### Access\nAll eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Diagnostic Dataset\nESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esc-bench/esc-diagnostic-dataset.",
"## LibriSpeech\n\nThe LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.\n\nExample Usage:\n\n\n\nTrain/validation splits:\n- 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')\n- 'URL'\n- 'URL'\n\nTest splits:\n- 'URL'\n- 'URL'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n- 'clean.100': 100 hours of training data from the 'clean' subset\n- 'clean.360': 360 hours of training data from the 'clean' subset\n- 'other.500': 500 hours of training data from the 'other' subset",
"## Common Voice\nCommon Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset of contains approximately 1,400 hours of audio data from speakers of various nationalities, accents and different recording conditions. It is licensed under CC0-1.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## VoxPopuli\nVoxPopuli s a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## TED-LIUM\nTED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## GigaSpeech\nGigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train' ('l' subset of training data (2,500 h))\n- 'validation'\n\nTest splits:\n- 'test'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n- 'xs': extra-small subset of training data (10 h)\n- 's': small subset of training data (250 h)\n- 'm': medium subset of training data (1,000 h)\n- 'xl': extra-large subset of training data (10,000 h)",
"## SPGISpeech\nSPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.\n\nLoading the dataset requires authorization.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train' ('l' subset of training data (~5,000 h))\n- 'validation'\n\nTest splits:\n- 'test'\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n- 's': small subset of training data (~200 h)\n- 'm': medium subset of training data (~1,000 h)",
"## Earnings-22\nEarnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0. \n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'",
"## AMI\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.\n\nExample usage:\n\n\n\nTraining/validation splits:\n- 'train'\n- 'validation'\n\nTest splits:\n- 'test'"
] |
d7309301bd51eac707cc1e80d7bf4209c2f71365 | # Dataset Card for "punctuation-nilc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/punctuation-nilc-bert | [
"language:pt",
"region:us"
] | 2022-10-16T17:02:29+00:00 | {"language": "pt", "dataset_info": {"features": [{"name": "text_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 1177684.2701598366, "num_examples": 2604}, {"name": "train", "num_bytes": 4224993.504240118, "num_examples": 9371}, {"name": "validation", "num_bytes": 479472.5920696906, "num_examples": 1041}], "download_size": 1802076, "dataset_size": 5882150.366469645}} | 2023-07-19T16:03:29+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #region-us
| # Dataset Card for "punctuation-nilc"
More Information needed | [
"# Dataset Card for \"punctuation-nilc\"\n\nMore Information needed"
] | [
"TAGS\n#language-Portuguese #region-us \n",
"# Dataset Card for \"punctuation-nilc\"\n\nMore Information needed"
] |
899b75095a573984a124727dcce8de7e30ad67dc |
# laion2b_multi_korean_subset_with_image
## Dataset Description
- **Download Size** 342 GB
This dataset contains the [Bingsu/laion2B-multi-korean-subset](https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset) images that were successfully downloaded with img2dataset.
There are 9,800,137 images.
The images were resized so that the shorter side is 256 pixels and were saved as webp files with quality 100.
## Usage
### 1. datasets
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2b_multi_korean_subset_with_image", streaming=True, split="train")
>>> dataset.features
{'image': Image(decode=True, id=None),
'text': Value(dtype='string', id=None),
'width': Value(dtype='int32', id=None),
'height': Value(dtype='int32', id=None)}
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=256x256>,
'text': '소닉기어 에어폰5 휴대용 스테레오 블루투스 헤드폰',
'width': 256,
'height': 256}
```
### 2. webdataset
This dataset is organised so that it can be used with [webdataset](https://github.com/webdataset/webdataset). If you process the data by streaming rather than downloading it, this is much faster than method 1.
!! The method below raises an error on Windows.
```python
>>> import webdataset as wds
>>> url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
>>> dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json")
```
```python
>>> next(iter(dataset))
...
```
As of this writing (2022-10-18), automatic decoding of webp images is not yet supported ([PR #215](https://github.com/webdataset/webdataset/pull/215)), so the images have to be decoded manually.
```python
import io
import webdataset as wds
from PIL import Image
def preprocess(data):
webp, jsn = data
img = Image.open(io.BytesIO(webp))
out = {
"image": img,
"text": jsn["caption"],
"width": jsn["width"],
"height": jsn["height"]
}
return out
url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json").map(preprocess)
```
```python
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=427x256>,
'text': '[따블리에]유아동 미술가운, 미술 전신복',
'width': 427,
'height': 256}
```
## Note

Each tar file is organised as shown above.
Because images that failed to download are skipped, the file names are not fully consecutive.
Each json file looks like the following.
```json
{
"caption": "\ub514\uc790\uc778 \uc53d\ud0b9\uacfc \ub514\uc9c0\ud138 \ud2b8\ub79c\uc2a4\ud3ec\uba54\uc774\uc158",
"url": "https://image.samsungsds.com/kr/insights/dt1.jpg?queryString=20210915031642",
"key": "014770069",
"status": "success",
"error_message": null,
"width": 649,
"height": 256,
"original_width": 760,
"original_height": 300,
"exif": "{}"
}
```
The txt file contains the "caption" field of the json file.
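Since the caption is duplicated in the txt file, the json entry can be skipped entirely. The snippet below is a sketch of that variant (it assumes the same webdataset behaviour as the example above; the txt value may arrive as bytes or str depending on the decoder):
```python
import io

import webdataset as wds
from PIL import Image

def preprocess_txt(data):
    webp, txt = data
    img = Image.open(io.BytesIO(webp))  # webp is still decoded manually, as noted above
    caption = txt.decode("utf-8") if isinstance(txt, bytes) else txt
    return {"image": img, "text": caption}

url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "txt").map(preprocess_txt)
```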
| Bingsu/laion2b_multi_korean_subset_with_image | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|laion/laion2B-multi",
"language:ko",
"license:cc-by-4.0",
"region:us"
] | 2022-10-17T03:32:45+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|laion/laion2B-multi"], "task_categories": ["feature-extraction"], "task_ids": [], "pretty_name": "laion2b multi korean subset with image", "tags": []} | 2022-11-03T05:10:40+00:00 | [] | [
"ko"
] | TAGS
#task_categories-feature-extraction #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|laion/laion2B-multi #language-Korean #license-cc-by-4.0 #region-us
|
# laion2b_multi_korean_subset_with_image
## Dataset Description
- Download Size 342 GB
img2dataset을 통해 다운로드에 성공한 Bingsu/laion2B-multi-korean-subset 이미지를 정리한 데이터셋입니다.
이미지는 9,800,137장입니다.
이미지는 짧은 쪽 길이가 256이 되도록 리사이즈 되었으며, 품질 100인 webp파일로 다운로드 되었습니다.
## Usage
### 1. datasets
### 2. webdataset
이 데이터셋은 webdataset으로 사용할 수 있도록 구성되어있습니다. 데이터를 다운로드하지 않고 스트리밍으로 처리한다면 1번 방법보다 훨씬 빠릅니다.
!! 아래 방법은 Windows에서는 에러가 발생합니다.
이 글을 작성하는 현재(22-10-18), webp이미지의 자동 디코딩을 지원하지 않고 있기 때문에(PR #215), 직접 디코딩해야 합니다.
## Note
!tar_image
각각의 tar 파일은 위 처럼 구성되어 있습니다.
다운로드에 실패한 이미지는 건너뛰어져있기 때문에 파일 이름은 완전히 연속적이지는 않습니다.
각각의 json 파일은 다음처럼 되어있습니다.
txt파일은 json파일의 "caption"을 담고 있습니다.
| [
"# laion2b_multi_korean_subset_with_image",
"## Dataset Description\n- Download Size 342 GB\n\nimg2dataset을 통해 다운로드에 성공한 Bingsu/laion2B-multi-korean-subset 이미지를 정리한 데이터셋입니다.\n\n이미지는 9,800,137장입니다.\n\n이미지는 짧은 쪽 길이가 256이 되도록 리사이즈 되었으며, 품질 100인 webp파일로 다운로드 되었습니다.",
"## Usage",
"### 1. datasets",
"### 2. webdataset\n\n이 데이터셋은 webdataset으로 사용할 수 있도록 구성되어있습니다. 데이터를 다운로드하지 않고 스트리밍으로 처리한다면 1번 방법보다 훨씬 빠릅니다.\n\n!! 아래 방법은 Windows에서는 에러가 발생합니다.\n\n\n\n\n\n이 글을 작성하는 현재(22-10-18), webp이미지의 자동 디코딩을 지원하지 않고 있기 때문에(PR #215), 직접 디코딩해야 합니다.",
"## Note\n\n!tar_image\n각각의 tar 파일은 위 처럼 구성되어 있습니다.\n\n다운로드에 실패한 이미지는 건너뛰어져있기 때문에 파일 이름은 완전히 연속적이지는 않습니다.\n\n각각의 json 파일은 다음처럼 되어있습니다.\n\n\n\ntxt파일은 json파일의 \"caption\"을 담고 있습니다."
] | [
"TAGS\n#task_categories-feature-extraction #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|laion/laion2B-multi #language-Korean #license-cc-by-4.0 #region-us \n",
"# laion2b_multi_korean_subset_with_image",
"## Dataset Description\n- Download Size 342 GB\n\nimg2dataset을 통해 다운로드에 성공한 Bingsu/laion2B-multi-korean-subset 이미지를 정리한 데이터셋입니다.\n\n이미지는 9,800,137장입니다.\n\n이미지는 짧은 쪽 길이가 256이 되도록 리사이즈 되었으며, 품질 100인 webp파일로 다운로드 되었습니다.",
"## Usage",
"### 1. datasets",
"### 2. webdataset\n\n이 데이터셋은 webdataset으로 사용할 수 있도록 구성되어있습니다. 데이터를 다운로드하지 않고 스트리밍으로 처리한다면 1번 방법보다 훨씬 빠릅니다.\n\n!! 아래 방법은 Windows에서는 에러가 발생합니다.\n\n\n\n\n\n이 글을 작성하는 현재(22-10-18), webp이미지의 자동 디코딩을 지원하지 않고 있기 때문에(PR #215), 직접 디코딩해야 합니다.",
"## Note\n\n!tar_image\n각각의 tar 파일은 위 처럼 구성되어 있습니다.\n\n다운로드에 실패한 이미지는 건너뛰어져있기 때문에 파일 이름은 완전히 연속적이지는 않습니다.\n\n각각의 json 파일은 다음처럼 되어있습니다.\n\n\n\ntxt파일은 json파일의 \"caption\"을 담고 있습니다."
] |
86820e0d48153d64153a2f70ace60c1090697f07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: csarron/bert-base-uncased-squad-v1
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-f76498-1781661804 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-17T04:17:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "csarron/bert-base-uncased-squad-v1", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-17T04:20:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: csarron/bert-base-uncased-squad-v1
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: csarron/bert-base-uncased-squad-v1\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: csarron/bert-base-uncased-squad-v1\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
1a2e776f38e29c4e70e3ce299b76b1933b463e60 | # Dataset Card for "Watts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eliwill/Watts | [
"region:us"
] | 2022-10-17T04:50:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5040818, "num_examples": 17390}, {"name": "validation", "num_bytes": 99856, "num_examples": 399}], "download_size": 2976066, "dataset_size": 5140674}} | 2022-10-17T04:50:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Watts"
More Information needed | [
"# Dataset Card for \"Watts\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Watts\"\n\nMore Information needed"
] |
e2303edabe49ef87791c19764cb0dbdb39177b1d | test | acurious/testdreambooth | [
"region:us"
] | 2022-10-17T06:59:16+00:00 | {} | 2022-10-17T07:05:53+00:00 | [] | [] | TAGS
#region-us
| test | [] | [
"TAGS\n#region-us \n"
] |
09e8932bb30d97e57848c117429e8f944acd3dfd | # Dataset Card for "lsf_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vvincentt/lsf_dataset | [
"region:us"
] | 2022-10-17T09:04:19+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 1391968, "num_examples": 1400}, {"name": "validation", "num_bytes": 497849, "num_examples": 500}], "download_size": 629433, "dataset_size": 1889817}} | 2022-10-17T09:41:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lsf_dataset"
More Information needed | [
"# Dataset Card for \"lsf_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lsf_dataset\"\n\nMore Information needed"
] |
81eca6cc0ab38a851d62f8fe8632acbd6c12c531 |
# Dataset Card for SloIE
### Dataset Summary
SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the [Slovene Lexical Database](http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.
For a more detailed description of the dataset, please see the paper by Škvorc et al. (2022), cited below.
### Supported Tasks and Leaderboards
Idiom detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'sentence': 'Fantje regljajo v enem kotu, deklice pa svoje obrazke barvajo s pisanimi barvami.',
'expression': 'barvati kaj s črnimi barvami',
'word_order': [11, 10, 12, 13, 14],
'sentence_words': ['Fantje', 'regljajo', 'v', 'enem', 'kotu,', 'deklice', 'pa', 'svoje', 'obrazke', 'barvajo', 's', 'pisanimi', 'barvami.'],
'is_idiom': ['*', '*', '*', '*', '*', '*', '*', '*', 'NE', 'NE', 'NE', 'NE', 'NE']
}
```
In this `sentence`, the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside `is_idiom`. The "*" annotations indicate the words are not part of the expression.
### Data Fields
- `sentence`: raw sentence in string form - **WARNING**: this is at times slightly different from the words inside `sentence_words` (e.g., "..." here could be "." in `sentence_words`);
- `expression`: the annotated idiomatic expression;
- `word_order`: numbers indicating the positions of tokens that belong to the expression;
- `sentence_words`: words in the sentence;
- `is_idiom`: a string denoting whether each word has an idiomatic (`"DA"`), literal (`"NE"`), or ambiguous (`"NEJASEN ZGLED"`) meaning. `"*"` means that the word is not part of the expression.
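For token-level idiom detection, the string annotations usually need to be mapped to integer labels first. The snippet below is a minimal sketch of such a preprocessing step (the `train` split name and the label scheme are assumptions, not part of the dataset's tooling):
```python
from datasets import load_dataset

sloie = load_dataset("cjvt/sloie", split="train")  # split name assumed

LABEL_MAP = {"*": 0, "NE": 0, "DA": 1}  # 1 = token used idiomatically

def to_token_labels(example):
    # Ambiguous tokens ("NEJASEN ZGLED") get -100 so a loss function can ignore them.
    return {"labels": [LABEL_MAP.get(tag, -100) for tag in example["is_idiom"]]}

sloie = sloie.map(to_token_labels)
print(sloie[0]["sentence_words"])
print(sloie[0]["labels"])
```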
## Additional Information
### Dataset Curators
Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@article{skvorc2022mice,
title = {MICE: Mining Idioms with Contextual Embeddings},
journal = {Knowledge-Based Systems},
volume = {235},
pages = {107606},
year = {2022},
doi = {https://doi.org/10.1016/j.knosys.2021.107606},
url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686},
author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko},
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| cjvt/sloie | [
"task_categories:text-classification",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"language:sl",
"license:cc-by-nc-sa-4.0",
"idiom-detection",
"multiword-expression-detection",
"region:us"
] | 2022-10-17T11:55:41+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100K<n<1M"], "source_datasets": [], "task_categories": ["text-classification", "token-classification"], "task_ids": [], "pretty_name": "Dataset of Slovene idiomatic expressions SloIE", "tags": ["idiom-detection", "multiword-expression-detection"]} | 2022-10-21T06:36:18+00:00 | [] | [
"sl"
] | TAGS
#task_categories-text-classification #task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-100K<n<1M #language-Slovenian #license-cc-by-nc-sa-4.0 #idiom-detection #multiword-expression-detection #region-us
|
# Dataset Card for SloIE
### Dataset Summary
SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical Database. Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.
For a more detailed description of the dataset, please see the paper Škvorc et al. (2022) - see below.
### Supported Tasks and Leaderboards
Idiom detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
In this 'sentence', the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside 'is_idiom'. The "*" annotations indicate the words are not part of the expression.
### Data Fields
- 'sentence': raw sentence in string form - WARNING: this is at times slightly different from the words inside 'sentence_words' (e.g., "..." here could be "." in 'sentence_words');
- 'expression': the annotated idiomatic expression;
- 'word_order': numbers indicating the positions of tokens that belong to the expression;
- 'sentence_words': words in the sentence;
- 'is_idiom': a string denoting whether each word has an idiomatic ('"DA"'), literal ('"NE"'), or ambiguous ('"NEJASEN ZGLED"') meaning. '"*"' means that the word is not part of the expression.
## Additional Information
### Dataset Curators
Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Contributions
Thanks to @matejklemen for adding this dataset.
| [
"# Dataset Card for SloIE",
"### Dataset Summary\n\nSloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical Database. Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.\n\nFor a more detailed description of the dataset, please see the paper Škvorc et al. (2022) - see below.",
"### Supported Tasks and Leaderboards\n\nIdiom detection.",
"### Languages\n\nSlovenian.",
"## Dataset Structure",
"### Data Instances\n\nA sample instance from the dataset:\n\n\nIn this 'sentence', the words of the expression \"barvati kaj s črnimi barvami\" are used in a literal sense, as indicated by the \"NE\" annotations inside 'is_idiom'. The \"*\" annotations indicate the words are not part of the expression.",
"### Data Fields\n\n- 'sentence': raw sentence in string form - WARNING: this is at times slightly different from the words inside 'sentence_words' (e.g., \"...\" here could be \".\" in 'sentence_words'); \n- 'expression': the annotated idiomatic expression; \n- 'word_order': numbers indicating the positions of tokens that belong to the expression; \n- 'sentence_words': words in the sentence; \n- 'is_idiom': a string denoting whether each word has an idiomatic ('\"DA\"'), literal ('\"NE\"'), or ambiguous ('\"NEJASEN ZGLED\"') meaning. '\"*\"' means that the word is not part of the expression.",
"## Additional Information",
"### Dataset Curators\n\nTadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.",
"### Licensing Information\n\nCC BY-NC-SA 4.0.",
"### Contributions\n\nThanks to @matejklemen for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-100K<n<1M #language-Slovenian #license-cc-by-nc-sa-4.0 #idiom-detection #multiword-expression-detection #region-us \n",
"# Dataset Card for SloIE",
"### Dataset Summary\n\nSloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical Database. Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.\n\nFor a more detailed description of the dataset, please see the paper Škvorc et al. (2022) - see below.",
"### Supported Tasks and Leaderboards\n\nIdiom detection.",
"### Languages\n\nSlovenian.",
"## Dataset Structure",
"### Data Instances\n\nA sample instance from the dataset:\n\n\nIn this 'sentence', the words of the expression \"barvati kaj s črnimi barvami\" are used in a literal sense, as indicated by the \"NE\" annotations inside 'is_idiom'. The \"*\" annotations indicate the words are not part of the expression.",
"### Data Fields\n\n- 'sentence': raw sentence in string form - WARNING: this is at times slightly different from the words inside 'sentence_words' (e.g., \"...\" here could be \".\" in 'sentence_words'); \n- 'expression': the annotated idiomatic expression; \n- 'word_order': numbers indicating the positions of tokens that belong to the expression; \n- 'sentence_words': words in the sentence; \n- 'is_idiom': a string denoting whether each word has an idiomatic ('\"DA\"'), literal ('\"NE\"'), or ambiguous ('\"NEJASEN ZGLED\"') meaning. '\"*\"' means that the word is not part of the expression.",
"## Additional Information",
"### Dataset Curators\n\nTadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.",
"### Licensing Information\n\nCC BY-NC-SA 4.0.",
"### Contributions\n\nThanks to @matejklemen for adding this dataset."
] |
42237ba9cdc8ce88397b1874e73925abba4f338a | # Sample
| sfujiwara/sample | [
"region:us"
] | 2022-10-17T12:20:15+00:00 | {} | 2022-10-18T20:27:18+00:00 | [] | [] | TAGS
#region-us
| # Sample
| [
"# Sample"
] | [
"TAGS\n#region-us \n",
"# Sample"
] |
850a76c5c794ae87d5e4a15665b3de5bd2e61d95 | #@markdown Add here the URLs to the images of the concept you are adding. 3-5 should be fine
urls = [
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512_02.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512_02.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3870-01-edit-02_crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4520_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4622-01-crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/ScanImage066_crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3348-01_512.png",
] | Nyckelpiga/images | [
"license:other",
"region:us"
] | 2022-10-17T13:49:35+00:00 | {"license": "other"} | 2022-10-17T16:19:59+00:00 | [] | [] | TAGS
#license-other #region-us
| #@markdown Add here the URLs to the images of the concept you are adding. 3-5 should be fine
urls = [
"URL
"URL
"URL
"URL
"URL
"URL
"URL
"URL
"URL
"URL
"URL
"URL
] | [] | [
"TAGS\n#license-other #region-us \n"
] |
d6c91cf96df74ce879bb4e8837f4a59a8e7341f0 | # Dataset Card for "error_correction_model_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/error_correction_model_dataset_raw | [
"region:us"
] | 2022-10-17T13:52:04+00:00 | {"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 706774777.602151, "num_examples": 4141927}], "download_size": 301173004, "dataset_size": 706774777.602151}} | 2022-10-17T13:54:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "error_correction_model_dataset"
More Information needed | [
"# Dataset Card for \"error_correction_model_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"error_correction_model_dataset\"\n\nMore Information needed"
] |
69b23927e6d5f4d09321dad6df33479be3ddee12 |
A dataset that has NBA data as well as social media data, including Twitter and Wikipedia.
| noahgift/social-power-nba | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-17T15:03:02+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-10-17T15:07:45+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #region-us
|
A dataset that has NBA data as well as social media data, including Twitter and Wikipedia.
| [] | [
"TAGS\n#license-cc-by-nc-nd-4.0 #region-us \n"
] |
ac8e5493caf159f4f717379cc7f434ad3c52e2f6 | Source: [https://github.com/google-research/mnist-c](https://github.com/google-research/mnist-c)
# MNIST-C
This repository contains the source code used to create the MNIST-C dataset, a
corrupted MNIST benchmark for testing out-of-distribution robustness of computer
vision models.
Please see our full paper [https://arxiv.org/abs/1906.02337](https://arxiv.org/abs/1906.02337) for more details.
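As a quick-start illustration (not part of the official repository), the released arrays can be read with NumPy alone, assuming the per-corruption folder layout of `.npy` files used in the Zenodo release:

```python
import numpy as np

# Assumed layout after extracting the Zenodo archive:
#   mnist_c/<corruption>/{train,test}_{images,labels}.npy
corruption = "fog"  # hypothetical choice; several corruption types are shipped
base = f"mnist_c/{corruption}"

test_images = np.load(f"{base}/test_images.npy")  # expected shape: (10000, 28, 28, 1), dtype uint8
test_labels = np.load(f"{base}/test_labels.npy")  # expected shape: (10000,)

print(test_images.shape, test_labels[:10])
```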
## Dataset
The static dataset is available for download at [https://zenodo.org/record/3239543](https://zenodo.org/record/3239543). | barkermrl/mnist-c | [
"license:apache-2.0",
"arxiv:1906.02337",
"region:us"
] | 2022-10-17T15:26:58+00:00 | {"license": "apache-2.0"} | 2022-10-17T16:28:57+00:00 | [
"1906.02337"
] | [] | TAGS
#license-apache-2.0 #arxiv-1906.02337 #region-us
| Source: URL
# MNIST-C
This repository contains the source code used to create the MNIST-C dataset, a
corrupted MNIST benchmark for testing out-of-distribution robustness of computer
vision models.
Please see our full paper URL for more details.
## Dataset
The static dataset is available for download at URL | [
"# MNIST-C\n\nThis repository contains the source code used to create the MNIST-C dataset, a\ncorrupted MNIST benchmark for testing out-of-distribution robustness of computer\nvision models.\n\nPlease see our full paper URL for more details.",
"## Dataset\n\nThe static dataset is available for download at URL"
] | [
"TAGS\n#license-apache-2.0 #arxiv-1906.02337 #region-us \n",
"# MNIST-C\n\nThis repository contains the source code used to create the MNIST-C dataset, a\ncorrupted MNIST benchmark for testing out-of-distribution robustness of computer\nvision models.\n\nPlease see our full paper URL for more details.",
"## Dataset\n\nThe static dataset is available for download at URL"
] |
ddf8d4553b13ae4aae9c5218e53851e38d721096 | # AutoTrain Dataset for project: ethos-sentiments
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ethos-sentiments.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "well it's clear now why europeans can't differ niggers and shitskins from human",
"target": 0
},
{
"text": "These boys will then grow up with people making fun of them and they will then hate their parents for ruining their lives.",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['hate_speech', 'no_hate_speech'], id=None)"
}
```
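As a sketch (not the loader AutoTrain itself uses), the fields above can be expressed directly with the `datasets` library:

```python
from datasets import ClassLabel, Features, Value

# Mirror the field definitions listed above.
features = Features(
    {
        "text": Value("string"),
        "target": ClassLabel(names=["hate_speech", "no_hate_speech"]),
    }
)

print(features["target"].int2str(0))                  # -> "hate_speech"
print(features["target"].str2int("no_hate_speech"))   # -> 1
```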
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 798 |
| valid | 200 |
| pachi107/autotrain-data-ethos-sentiments | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-10-17T15:28:10+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-17T15:28:44+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: ethos-sentiments
===============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project ethos-sentiments.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
a4ed436fa9208153c41e658f5666b9be05e00456 | MedNorm2SnomedCT2UMLS
Paper on Mednorm and harmonisation: https://aclanthology.org/W19-3204.pdf
The medical concept normalisation task aims to map textual descriptions to standard terminologies such as SNOMED-CT or MedDRA.
Existing publicly available datasets annotated using different terminologies cannot be simply merged and utilised, and therefore become less
valuable when developing machine learning-based concept normalisation systems.
To address that, we designed a data harmonisation pipeline and engineered a corpus of 27,979 textual descriptions simultaneously mapped to both MedDRA and SNOMED-CT,
sourced from five publicly available datasets across biomedical and social media domains.
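For illustration only, a harmonised record and a naive exact-match normaliser might look like the sketch below; the field names and codes are placeholders, not the actual columns of this corpus:

```python
# Placeholder records: field names and codes are illustrative only.
records = [
    {"description": "severe headache", "meddra_code": "MEDDRA_PLACEHOLDER_1", "snomed_ct_id": "SNOMED_PLACEHOLDER_1"},
    {"description": "feeling dizzy", "meddra_code": "MEDDRA_PLACEHOLDER_2", "snomed_ct_id": "SNOMED_PLACEHOLDER_2"},
]

# Naive exact-match normaliser: map a free-text description to both terminologies at once.
lookup = {r["description"].lower(): (r["meddra_code"], r["snomed_ct_id"]) for r in records}

def normalise(text: str):
    return lookup.get(text.strip().lower(), (None, None))

print(normalise("Severe headache"))
```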
| awacke1/MedNorm2SnomedCT2UMLS | [
"license:mit",
"region:us"
] | 2022-10-17T17:17:16+00:00 | {"license": "mit"} | 2023-01-05T14:05:26+00:00 | [] | [] | TAGS
#license-mit #region-us
| MedNorm2SnomedCT2UMLS
Paper on Mednorm and harmonisation: URL
The medical concept normalisation task aims to map textual descriptions to standard terminologies such as SNOMED-CT or MedDRA.
Existing publicly available datasets annotated using different terminologies cannot be simply merged and utilised, and therefore become less
valuable when developing machine learning-based concept normalisation systems.
To address that, we designed a data harmonisation pipeline and engineered a corpus of 27,979 textual descriptions simultaneously mapped to both MedDRA and SNOMED-CT,
sourced from five publicly available datasets across biomedical and social media domains.
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
b8cf42f0d1cf99a04313dd8d7d77bb0fb1d42a19 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex1
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__ex1-all-65db7c-1796062129 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-18T05:38:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex1"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex1", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-18T06:27:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex1
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @phpthinh for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex1\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex1\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] |
7969c200d7b0eec370bce6870897ef06d678248e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex2
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__ex2-all-93c06b-1796162130 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-18T05:39:26+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex2"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex2", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-18T06:26:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex2
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @phpthinh for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex2\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex2\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] |
40e52ccedb5de97ee785848924f08544f3ca0969 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Emanuel/bertweet-emotion-base
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
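A rough local approximation of this evaluation (not the evaluator's exact pipeline) can be put together with the standard `transformers`/`datasets` APIs; the comparison below assumes the model's label names match the dataset's class names:

```python
from datasets import load_dataset
from transformers import pipeline

clf = pipeline("text-classification", model="Emanuel/bertweet-emotion-base")
test = load_dataset("emotion", split="test")

preds = [out["label"] for out in clf(test["text"], truncation=True)]
gold = [test.features["label"].int2str(i) for i in test["label"]]

# Assumption: the model's label names line up with the dataset's class names.
accuracy = sum(p.lower() == g.lower() for p, g in zip(preds, gold)) / len(gold)
print(f"accuracy ~ {accuracy:.3f}")
```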
## Contributions
Thanks to [@nayan06](https://huggingface.co/nayan06) for evaluating this model. | autoevaluate/autoeval-eval-emotion-default-1b690b-1797662163 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-18T05:54:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Emanuel/bertweet-emotion-base", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-18T05:55:22+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Emanuel/bertweet-emotion-base
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nayan06 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Emanuel/bertweet-emotion-base\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nayan06 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Emanuel/bertweet-emotion-base\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nayan06 for evaluating this model."
] |
645a6fcac4e0a587b428e545ccd8d68d71d847b3 | pip install diffusers transformers nvidia-ml-py3 ftfy torch pillow
| Makokokoko/aaaaa | [
"region:us"
] | 2022-10-18T06:37:03+00:00 | {} | 2022-10-18T06:38:02+00:00 | [] | [] | TAGS
#region-us
| pip install diffusers transformers nvidia-ml-py3 ftfy torch pillow
| [] | [
"TAGS\n#region-us \n"
] |
eee375a1b72660d3bd2c5b468d18615279f6d992 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex3
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. | autoevaluate/autoeval-eval-phpthinh__ex3-all-630c04-1799362235 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-18T07:18:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["phpthinh/ex3"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "phpthinh/ex3", "dataset_config": "all", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-18T08:07:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: phpthinh/ex3
* Config: all
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @phpthinh for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex3\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: phpthinh/ex3\n* Config: all\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @phpthinh for evaluating this model."
] |
b45637b6ba33e8c7709e20385ac1c8dbc0cdec1f | # Dataset Card for Pokémon type captions
Contains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex).
Each Pokémon is represented once by the default form from [PokéAPI](https://pokeapi.co/)
Each row contains `image` and `text` keys:
- `image` is a 475x475 PIL jpg of the Pokémon's official artwork.
- `text` is a label describing the Pokémon by its type(s)
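
A minimal sketch of reading these rows with the `datasets` library (repository id taken from this card):

```python
from datasets import load_dataset

ds = load_dataset("GabeHD/pokemon-type-captions", split="train")

row = ds[0]
print(row["text"])               # type-based caption for the first Pokémon
row["image"].save("sample.jpg")  # `image` is decoded to a PIL.Image by the datasets library
```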
## Attributions
_Images and typing information pulled from [PokéAPI](https://pokeapi.co/)_
_Based on the [Lambda Labs Pokémon Blip Captions Dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)_
| GabeHD/pokemon-type-captions | [
"region:us"
] | 2022-10-18T07:38:18+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19372532.0, "num_examples": 898}], "download_size": 0, "dataset_size": 19372532.0}} | 2022-10-23T03:40:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for Pokémon type captions
Contains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex).
Each Pokémon is represented once by the default form from PokéAPI
Each row contains 'image' and 'text' keys:
- 'image' is a 475x475 PIL jpg of the Pokémon's official artwork.
- 'text' is a label describing the Pokémon by its type(s)
## Attributions
_Images and typing information pulled from PokéAPI_
_Based on the Lambda Labs Pokémon Blip Captions Dataset_
| [
"# Dataset Card for Pokémon type captions\n\nContains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex). \nEach Pokémon is represented once by the default form from PokéAPI\n\nEach row contains 'image' and 'text' keys:\n- 'image' is a 475x475 PIL jpg of the Pokémon's official artwork. \n- 'text' is a label describing the Pokémon by its type(s)",
"## Attributions\n\n_Images and typing information pulled from PokéAPI_\n\n_Based on the Lambda Labs Pokémon Blip Captions Dataset_"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Pokémon type captions\n\nContains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex). \nEach Pokémon is represented once by the default form from PokéAPI\n\nEach row contains 'image' and 'text' keys:\n- 'image' is a 475x475 PIL jpg of the Pokémon's official artwork. \n- 'text' is a label describing the Pokémon by its type(s)",
"## Attributions\n\n_Images and typing information pulled from PokéAPI_\n\n_Based on the Lambda Labs Pokémon Blip Captions Dataset_"
] |
a9ce511fb3bfa4898bd7dcebe6f23e08211243a8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: 0ys/mt5-small-finetuned-amazon-en-es
* Dataset: conceptual_captions
* Config: unlabeled
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@DonaldDaz](https://huggingface.co/DonaldDaz) for evaluating this model. | autoevaluate/autoeval-eval-conceptual_captions-unlabeled-ccbde0-1800162251 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-18T08:04:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conceptual_captions"], "eval_info": {"task": "summarization", "model": "0ys/mt5-small-finetuned-amazon-en-es", "metrics": ["accuracy"], "dataset_name": "conceptual_captions", "dataset_config": "unlabeled", "dataset_split": "train", "col_mapping": {"text": "image_url", "target": "caption"}}} | 2022-10-18T22:14:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: 0ys/mt5-small-finetuned-amazon-en-es
* Dataset: conceptual_captions
* Config: unlabeled
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @DonaldDaz for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: 0ys/mt5-small-finetuned-amazon-en-es\n* Dataset: conceptual_captions\n* Config: unlabeled\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @DonaldDaz for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: 0ys/mt5-small-finetuned-amazon-en-es\n* Dataset: conceptual_captions\n* Config: unlabeled\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @DonaldDaz for evaluating this model."
] |
3d0d0a2113f3a35a0163f68d96c6307d641f1a5a | # Dataset Card for TED descriptions
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gigant/ted_descriptions | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-18T09:24:43+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "TED descriptions", "tags": [], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "descr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2617778, "num_examples": 5705}], "download_size": 1672988, "dataset_size": 2617778}} | 2022-10-18T10:16:29+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us
| # Dataset Card for TED descriptions
More Information needed | [
"# Dataset Card for TED descriptions\n\n\n\nMore Information needed"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for TED descriptions\n\n\n\nMore Information needed"
] |
9d7a3960c7b4b1f6efb1e97bd4d469a217b46930 |
# Dataset Card for "German LER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/elenanereiss/Legal-Entity-Recognition](https://github.com/elenanereiss/Legal-Entity-Recognition)
- **Paper:** [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf)
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the `BIO` tagging scheme.
The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes (`ner_tags`) and another one with a set of 7 coarse-grained classes (`ner_coarse_tags`). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).

For more details see [https://arxiv.org/pdf/2003.13016v1.pdf](https://arxiv.org/pdf/2003.13016v1.pdf).
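
Since the tags follow the `BIO` scheme, a small helper like the sketch below (illustrative, not part of the dataset) can collapse a tagged token sequence into labelled spans:

```python
def bio_to_spans(tags):
    """Collapse BIO tags into (label, start, end) spans; `end` is exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if label is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # extend the current span
        else:  # "O" or an I- tag that does not continue the open span
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

# Illustrative tags for "§ 14 Abs. 2 TzBfG" annotated as a law reference (GS).
print(bio_to_spans(["B-GS", "I-GS", "I-GS", "I-GS", "I-GS", "O"]))  # -> [('GS', 0, 5)]
```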
### Supported Tasks and Leaderboards
- **Tasks:** Named Entity Recognition
- **Leaderboards:**
### Languages
German
## Dataset Structure
### Data Instances
```python
{
'id': '1',
'tokens': ['Eine', 'solchermaßen', 'verzögerte', 'oder', 'bewusst', 'eingesetzte', 'Verkettung', 'sachgrundloser', 'Befristungen', 'schließt', '§', '14', 'Abs.', '2', 'Satz', '2', 'TzBfG', 'aus', '.'],
'ner_tags': [38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 3, 22, 22, 22, 22, 22, 22, 38, 38],
'ner_coarse_tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 9, 9, 9, 9, 9, 9, 14, 14]
}
```
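
A minimal sketch of loading the dataset (default configuration assumed) and turning the integer tags of an instance like this back into tag strings:

```python
from datasets import load_dataset

dataset = load_dataset("elenanereiss/german-ler")
example = dataset["train"][0]

tag_names = dataset["train"].features["ner_tags"].feature.names
print(list(zip(example["tokens"], (tag_names[t] for t in example["ner_tags"]))))
```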
### Data Fields
```python
{
'id': Value(dtype='string', id=None),
'tokens': Sequence(feature=Value(dtype='string', id=None),
length=-1, id=None),
'ner_tags': Sequence(feature=ClassLabel(num_classes=39,
names=['B-AN',
'B-EUN',
'B-GRT',
'B-GS',
'B-INN',
'B-LD',
'B-LDS',
'B-LIT',
'B-MRK',
'B-ORG',
'B-PER',
'B-RR',
'B-RS',
'B-ST',
'B-STR',
'B-UN',
'B-VO',
'B-VS',
'B-VT',
'I-AN',
'I-EUN',
'I-GRT',
'I-GS',
'I-INN',
'I-LD',
'I-LDS',
'I-LIT',
'I-MRK',
'I-ORG',
'I-PER',
'I-RR',
'I-RS',
'I-ST',
'I-STR',
'I-UN',
'I-VO',
'I-VS',
'I-VT',
'O'],
id=None),
length=-1,
id=None),
'ner_coarse_tags': Sequence(feature=ClassLabel(num_classes=15,
names=['B-LIT',
'B-LOC',
'B-NRM',
'B-ORG',
'B-PER',
'B-REG',
'B-RS',
'I-LIT',
'I-LOC',
'I-NRM',
'I-ORG',
'I-PER',
'I-REG',
'I-RS',
'O'],
id=None),
length=-1,
id=None)
}
```
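
For fine-tuning a token classifier, the `ClassLabel` names above can be turned into the usual `id2label`/`label2id` mappings, for example (default configuration assumed):

```python
from datasets import load_dataset

ds = load_dataset("elenanereiss/german-ler")

fine_names = ds["train"].features["ner_tags"].feature.names
coarse_names = ds["train"].features["ner_coarse_tags"].feature.names

id2label = dict(enumerate(fine_names))
label2id = {name: idx for idx, name in id2label.items()}

print(len(fine_names), len(coarse_names))  # 39 fine-grained vs. 15 coarse-grained BIO tags
```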
### Data Splits
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | 53384 | 6666 | 6673 |
## Dataset Creation
### Curation Rationale
Documents in the legal domain contain multiple references to named entities, especially domain-specific named entities, i. e., jurisdictions, legal institutions, etc. Legal documents are unique and differ greatly from newspaper texts. On the one hand, the occurrence of general-domain named entities is relatively rare. On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.). Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.
### Source Data
Court decisions from 2017 and 2018 were selected for the dataset, published online by the [Federal Ministry of Justice and Consumer Protection](http://www.rechtsprechung-im-internet.de). The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
#### Initial Data Collection and Normalization
From the table of [contents](http://www.rechtsprechung-im-internet.de/rii-toc.xml), 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements `Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel`. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed.
The extracted data was split into sentences, tokenised using [SoMaJo](https://github.com/tsproisl/SoMaJo) and manually annotated in [WebAnno](https://webanno.github.io/webanno/).
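
The sentence-splitting and tokenisation step could look roughly like the sketch below (the exact SoMaJo settings used for this corpus are not documented here):

```python
from somajo import SoMaJo

# "de_CMC" is SoMaJo's German model; split_sentences=True also performs sentence splitting.
tokenizer = SoMaJo("de_CMC", split_sentences=True)

paragraphs = ["Eine solchermaßen verzögerte Verkettung schließt § 14 Abs. 2 Satz 2 TzBfG aus."]
for sentence in tokenizer.tokenize_text(paragraphs):
    print([token.text for token in sentence])
```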
#### Who are the source language producers?
The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.
### Annotations
#### Annotation process
For more details see [annotation guidelines](https://github.com/elenanereiss/Legal-Entity-Recognition/blob/master/docs/Annotationsrichtlinien.pdf) (in German).
<!-- #### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Personal and Sensitive Information
A fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization.
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)-->
### Licensing Information
[CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2003.13016,
doi = {10.48550/ARXIV.2003.13016},
url = {https://arxiv.org/abs/2003.13016},
author = {Leitner, Elena and Rehm, Georg and Moreno-Schneider, Julián},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A Dataset of German Legal Documents for Named Entity Recognition},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
| elenanereiss/german-ler | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"ner, named entity recognition, legal ner, legal texts, label classification",
"arxiv:2003.13016",
"doi:10.57967/hf/0046",
"region:us"
] | 2022-10-18T10:10:32+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["de"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "dataset-of-legal-documents", "pretty_name": "German Named Entity Recognition in Legal Documents", "tags": ["ner, named entity recognition, legal ner, legal texts, label classification"], "train-eval-index": [{"config": "conll2003", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}}]} | 2022-10-26T07:32:17+00:00 | [
"2003.13016"
] | [
"de"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-German #license-cc-by-4.0 #ner, named entity recognition, legal ner, legal texts, label classification #arxiv-2003.13016 #doi-10.57967/hf/0046 #region-us
| Dataset Card for "German LER"
=============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: URL
* Point of Contact: elena.leitner@URL
### Dataset Summary
A dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the 'BIO' tagging scheme.
The dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes ('ner\_tags') and another one with a set of 7 coarse-grained classes ('ner\_coarse\_tags'). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).
Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.
### Source Data
Court decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
#### Initial Data Collection and Normalization
From the table of contents, 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements 'Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel'. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed.
The extracted data was split into sentences, tokenised using SoMaJo and manually annotated in WebAnno.
#### Who are the source language producers?
The Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.
### Annotations
#### Annotation process
For more details see annotation guidelines (in German).
### Personal and Sensitive Information
A fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization.
### Licensing Information
CC BY-SA 4.0 license
### Contributions
| [
"### Dataset Summary\n\n\nA dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the 'BIO' tagging scheme.\n\n\nThe dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes ('ner\\_tags') and another one with a set of 7 coarse-grained classes ('ner\\_coarse\\_tags'). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).\n\n\n. Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.",
"### Source Data\n\n\nCourt decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).",
"#### Initial Data Collection and Normalization\n\n\nFrom the table of contents, 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements 'Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel'. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed.\n\n\nThe extracted data was split into sentences, tokenised using SoMaJo and manually annotated in WebAnno.",
"#### Who are the source language producers?\n\n\nThe Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.",
"### Annotations",
"#### Annotation process\n\n\nFor more details see annotation guidelines (in German).",
"### Personal and Sensitive Information\n\n\nA fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization.",
"### Licensing Information\n\n\nCC BY-SA 4.0 license",
"### Contributions"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-German #license-cc-by-4.0 #ner, named entity recognition, legal ner, legal texts, label classification #arxiv-2003.13016 #doi-10.57967/hf/0046 #region-us \n",
"### Dataset Summary\n\n\nA dataset of Legal Documents from German federal court decisions for Named Entity Recognition. The dataset is human-annotated with 19 fine-grained entity classes. The dataset consists of approx. 67,000 sentences and contains 54,000 annotated entities. NER tags use the 'BIO' tagging scheme.\n\n\nThe dataset includes two different versions of annotations, one with a set of 19 fine-grained semantic classes ('ner\\_tags') and another one with a set of 7 coarse-grained classes ('ner\\_coarse\\_tags'). There are 53,632 annotated entities in total, the majority of which (74.34 %) are legal entities, the others are person, location and organization (25.66 %).\n\n\n. Most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents. Accordingly, there is a great need for an NER-annotated dataset consisting of legal documents, including the corresponding development of a typology of semantic concepts and uniform annotation guidelines.",
"### Source Data\n\n\nCourt decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).",
"#### Initial Data Collection and Normalization\n\n\nFrom the table of contents, 107 documents from each court were selected (see Table 1). The data was collected from the XML documents, i. e., it was extracted from the XML elements 'Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel'. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and those that belonged to previous legal proceedings was deleted. Paragraph numbers were removed.\n\n\nThe extracted data was split into sentences, tokenised using SoMaJo and manually annotated in WebAnno.",
"#### Who are the source language producers?\n\n\nThe Federal Ministry of Justice and the Federal Office of Justice provide selected decisions. Court decisions were produced by humans.",
"### Annotations",
"#### Annotation process\n\n\nFor more details see annotation guidelines (in German).",
"### Personal and Sensitive Information\n\n\nA fundamental characteristic of the published decisions is that all personal information have been anonymised for privacy reasons. This affects the classes person, location and organization.",
"### Licensing Information\n\n\nCC BY-SA 4.0 license",
"### Contributions"
] |
e11292434fb9e789f80041723bdf2e61cfdc7e0b |
SRL annotated corpora for extracting experiencer and cause of emotions | Maxstan/srl_for_emotions_russian | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-18T10:46:43+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-18T11:39:32+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
SRL annotated corpora for extracting experiencer and cause of emotions | [] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n"
] |
1e7a79e5c2bcd57dc6324cb159732771229bc89a |
The data contains comments from political and nonpolitical Russian-speaking YouTube channels.
Date interval: 1 year between April 30, 2020, and April 30, 2021 | Maxstan/russian_youtube_comments_political_and_nonpolitical | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-18T11:40:44+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-18T11:57:07+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
The data contains comments from political and nonpolitical Russian-speaking YouTube channels.
Date interval: 1 year between April 30, 2020, and April 30, 2021 | [] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n"
] |
3cc76d8c9536209e9772338d9567b8ae2a767d79 | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: squad-ja
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
train-eval-index:
- col_mapping:
answers:
answer_start: answer_start
text: text
context: context
question: question
config: squad_v2
metrics:
- name: SQuAD v2
type: squad_v2
splits:
eval_split: validation
train_split: train
task: question-answering
task_id: extractive_question_answering
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Google翻訳APIで翻訳した日本語版SQuAD2.0
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Japanese
## Dataset Structure
### Data Instances
```
{
"start": 43,
"end": 88,
"question": "ビヨンセ は いつ から 人気 を 博し 始め ました か ?",
"context": "BeyoncéGiselleKnowles - Carter ( /b i ː ˈ j ɒ nse ɪ / bee - YON - say ) ( 1981 年 9 月 4 日 生まれ ) は 、 アメリカ の シンガー 、 ソング ライター 、 レコード プロデューサー 、 女優 です 。 テキサス 州 ヒューストン で 生まれ育った 彼女 は 、 子供 の 頃 に さまざまな 歌 と 踊り の コンテスト に 出演 し 、 1990 年 代 後半 に R & B ガールグループ Destiny & 39 ; sChild の リード シンガー と して 名声 を 博し ました 。 父親 の マシューノウルズ が 管理 する この グループ は 、 世界 で 最も 売れて いる 少女 グループ の 1 つ に なり ました 。 彼 ら の 休み は ビヨンセ の デビュー アルバム 、 DangerouslyinLove ( 2003 ) の リリース を 見 ました 。 彼女 は 世界 中 で ソロ アーティスト と して 確立 し 、 5 つ の グラミー 賞 を 獲得 し 、 ビル ボード ホット 100 ナンバーワン シングル 「 CrazyinLove 」 と 「 BabyBoy 」 を フィーチャー し ました 。",
"id": "56be85543aeaaa14008c9063"
}
```
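
The `start`/`end` offsets appear to index into `context`; a tiny sketch of recovering the answer text, under the assumption that `end` is an exclusive character offset (not verified by this card):

```python
example = {
    "start": 43,
    "end": 88,
    "question": "ビヨンセ は いつ から 人気 を 博し 始め ました か ?",
    "context": "...",  # the full context string from the instance above
    "id": "56be85543aeaaa14008c9063",
}

# Assumption: `end` is exclusive; if it turns out to be inclusive, use end + 1 instead.
answer = example["context"][example["start"]:example["end"]]
print(answer)
```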
### Data Fields
- start
- end
- question
- context
- id
### Data Splits
- train 86820
- valid 5927
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Kentaline/hf-dataset-study | [
"license:other",
"region:us"
] | 2022-10-18T12:49:15+00:00 | {"license": "other"} | 2022-10-18T13:35:42+00:00 | [] | [] | TAGS
#license-other #region-us
| ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: squad
pretty_name: squad-ja
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
train-eval-index:
- col_mapping:
answers:
answer_start: answer_start
text: text
context: context
question: question
config: squad_v2
metrics:
- name: SQuAD v2
type: squad_v2
splits:
eval_split: validation
train_split: train
task: question-answering
task_id: extractive_question_answering
---
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Google翻訳APIで翻訳した日本語版SQuAD2.0
### Supported Tasks and Leaderboards
### Languages
Japanese
## Dataset Structure
### Data Instances
### Data Fields
- start
- end
- question
- context
- id
### Data Splits
- train 86820
- valid 5927
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nGoogle翻訳APIで翻訳した日本語版SQuAD2.0",
"### Supported Tasks and Leaderboards",
"### Languages\nJapanese",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n- start\n- end\n- question\n- context\n- id",
"### Data Splits\n\n- train 86820\n- valid 5927",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#license-other #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nGoogle翻訳APIで翻訳した日本語版SQuAD2.0",
"### Supported Tasks and Leaderboards",
"### Languages\nJapanese",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n- start\n- end\n- question\n- context\n- id",
"### Data Splits\n\n- train 86820\n- valid 5927",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
bd9407e7063dce3c2df673ed6c088cadd17fb268 | # event classifcaiton dataset | fanxiaonan/event_classification | [
"license:mit",
"region:us"
] | 2022-10-18T13:05:44+00:00 | {"license": "mit"} | 2022-10-18T13:10:19+00:00 | [] | [] | TAGS
#license-mit #region-us
| # event classifcaiton dataset | [
"# event classifcaiton dataset"
] | [
"TAGS\n#license-mit #region-us \n",
"# event classifcaiton dataset"
] |
23ca06a50ba8b4860567c1112ba142820d223743 | # SimKoR
We provide a Korean sentence text-similarity pair dataset built from the sentiment analysis corpus at [bab2min/corpus](https://github.com/bab2min/corpus).
The source data was crawled from Korean reviews on the Naver Shopping website; we reconstructed a subset of that corpus to build our dataset.
## Dataset description
The original dataset description can be found [here](https://github.com/bab2min/corpus/tree/master/sentiment).

For Korean contrastive learning there are few suitable validation datasets (only KorNLI). To create a contrastive-learning validation dataset, we converted the original sentiment analysis corpus into a sentence text-similarity dataset. SimKoR was created by grouping pairs of sentences; each score in [0, 1, 2, 4, 5] reflects how far apart the meanings of the two sentences are.
## Data Distribution
Our dataset classes consist of the text similarity scores [0, 1, 2, 4, 5]; each score bucket contains the same number of examples.
<table>
<tr><th>Score</th><th>train</th><th>valid</th><th>test</th></tr>
<tr><th>5</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>4</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>2</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>1</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>0</th><th>4,000</th><th>1,000</th><th>1,000</th></tr>
<tr><th>All</th><th>20,000</th><th>5,000</th><th>5,000</th></tr>
</table>
## Example
```
text1 text2 label
고속충전이 안됨ㅠㅠ 집에매연냄새없앨려했는데 그냥창문여는게더 공기가좋네요 5
적당히 맵고 괜찮네요 어제 시킨게 벌써 왔어요 ㅎㅎ 배송빠르고 품질양호합니다 4
다 괜찮은데 배송이 10일이나 걸린게 많이 아쉽네요. 선반 설치하고 나니 주방 베란다 완전 다시 태어났어요~ 2
가격 싸지만 쿠션이 약해 무릎 아파요~ 반품하려구요~ 튼튼하고 빨래도 많이 걸 수 있고 잘쓰고 있어요 1
각인이 찌그저져있고 엉성합니다. 처음 해보는 방탈출이었는데 너무 재미있었어요. 0
```
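The snippet below is a minimal sketch of loading these splits and using them as a similarity validation set. The file names (`train.tsv`, `valid.tsv`, `test.tsv`), the presence of a header row, and the placeholder model scores are assumptions, not part of the original release.
```python
# Minimal sketch: load the TSV splits and correlate (hypothetical) model
# similarity scores with the labels. File names and header row are assumed.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

splits = {name: pd.read_csv(f"{name}.tsv", sep="\t")   # columns: text1, text2, label
          for name in ("train", "valid", "test")}

valid = splits["valid"]
print(valid["label"].value_counts())        # expect 1,000 pairs per score in valid

# Placeholder scores standing in for a sentence encoder's cosine similarities.
model_scores = np.random.rand(len(valid))
corr, _ = spearmanr(model_scores, valid["label"])
print(f"Spearman correlation: {corr:.4f}")
```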
## Contributors
The main contributors of the work are:
- [Jaemin Kim](https://github.com/kimfunn)\*
- [Yohan Na](https://github.com/nayohan)\*
- [Kangmin Kim](https://github.com/Gangsss)
- [Sangrak Lee](https://github.com/PangRAK)
\*: Equal Contribution
Hanyang University Data Intelligence Lab [(DILAB)](http://dilab.hanyang.ac.kr/) provided support ❤️
## Github
- **Repository :** [SimKoR](https://github.com/nayohan/SimKoR)
## License
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>. | DILAB-HYU/SimKoR | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-18T13:51:49+00:00 | {"license": "cc-by-4.0"} | 2022-10-18T16:27:05+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| SimKoR
======
We provide korean sentence text similarity pair dataset using sentiment analysis corpus from bab2min/corpus.
This data crawling korean review from naver shopping website. we reconstruct subset of dataset to make our dataset.
Dataset description
-------------------
The original dataset description can be found at the link [[here]](URL
!그림6
In korean Contrastive Learning, There are few suitable validation dataset (only KorNLI). To create contrastive learning validation dataset, we changed original sentiment analysis dataset to sentence text similar dataset. Our simkor dataset was created by grouping pair of sentence. Each score [0,1,2,4,5] means how far the meaning is between sentences.
Data Distribution
-----------------
Our dataset class consist of text similarity score [0, 1,2,4,5]. each score consists of data of the same size.
Example
-------
Contributors
------------
The main contributors of the work are :
* Jaemin Kim\*
* Yohan Na\*
* Kangmin Kim
* Sangrak Lee
\*: Equal Contribution
Hanyang University Data Intelligence Lab(DILAB) providing support ️
Github
------
* Repository : SimKoR
License
-------
<a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL />This work is licensed under a <a rel="license" href="URL Commons Attribution-ShareAlike 4.0 International License.
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
e44a22761da9acdd42b4181f33aac27a95436824 | # Dataset Card for "stackoverflow-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mrm8488/stackoverflow-ner | [
"region:us"
] | 2022-10-18T13:55:02+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 680079, "num_examples": 3108}, {"name": "train", "num_bytes": 2034117, "num_examples": 9263}, {"name": "validation", "num_bytes": 640935, "num_examples": 2936}], "download_size": 692070, "dataset_size": 3355131}} | 2022-10-18T13:55:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "stackoverflow-ner"
More Information needed | [
"# Dataset Card for \"stackoverflow-ner\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"stackoverflow-ner\"\n\nMore Information needed"
] |
6ae613ba744ee56f645aee9c577d04f7e3d48b30 | # M2D2: A Massively Multi-domain Language Modeling Dataset
*From the paper "[M2D2: A Massively Multi-domain Language Modeling Dataset](https://arxiv.org/abs/2210.07370)", (Reid et al., EMNLP 2022)*
Load the dataset as follows:
```python
import datasets
dataset = datasets.load_dataset("machelreid/m2d2", "cs.CL") # replace cs.CL with the domain of your choice
print(dataset['train'][0]['text'])
```
## Domains
- Culture_and_the_arts
- Culture_and_the_arts__Culture_and_Humanities
- Culture_and_the_arts__Games_and_Toys
- Culture_and_the_arts__Mass_media
- Culture_and_the_arts__Performing_arts
- Culture_and_the_arts__Sports_and_Recreation
- Culture_and_the_arts__The_arts_and_Entertainment
- Culture_and_the_arts__Visual_arts
- General_referece
- General_referece__Further_research_tools_and_topics
- General_referece__Reference_works
- Health_and_fitness
- Health_and_fitness__Exercise
- Health_and_fitness__Health_science
- Health_and_fitness__Human_medicine
- Health_and_fitness__Nutrition
- Health_and_fitness__Public_health
- Health_and_fitness__Self_care
- History_and_events
- History_and_events__By_continent
- History_and_events__By_period
- History_and_events__By_region
- Human_activites
- Human_activites__Human_activities
- Human_activites__Impact_of_human_activity
- Mathematics_and_logic
- Mathematics_and_logic__Fields_of_mathematics
- Mathematics_and_logic__Logic
- Mathematics_and_logic__Mathematics
- Natural_and_physical_sciences
- Natural_and_physical_sciences__Biology
- Natural_and_physical_sciences__Earth_sciences
- Natural_and_physical_sciences__Nature
- Natural_and_physical_sciences__Physical_sciences
- Philosophy
- Philosophy_and_thinking
- Philosophy_and_thinking__Philosophy
- Philosophy_and_thinking__Thinking
- Religion_and_belief_systems
- Religion_and_belief_systems__Allah
- Religion_and_belief_systems__Belief_systems
- Religion_and_belief_systems__Major_beliefs_of_the_world
- Society_and_social_sciences
- Society_and_social_sciences__Social_sciences
- Society_and_social_sciences__Society
- Technology_and_applied_sciences
- Technology_and_applied_sciences__Agriculture
- Technology_and_applied_sciences__Computing
- Technology_and_applied_sciences__Engineering
- Technology_and_applied_sciences__Transport
- alg-geom
- ao-sci
- astro-ph
- astro-ph.CO
- astro-ph.EP
- astro-ph.GA
- astro-ph.HE
- astro-ph.IM
- astro-ph.SR
- astro-ph_l1
- atom-ph
- bayes-an
- chao-dyn
- chem-ph
- cmp-lg
- comp-gas
- cond-mat
- cond-mat.dis-nn
- cond-mat.mes-hall
- cond-mat.mtrl-sci
- cond-mat.other
- cond-mat.quant-gas
- cond-mat.soft
- cond-mat.stat-mech
- cond-mat.str-el
- cond-mat.supr-con
- cond-mat_l1
- cs.AI
- cs.AR
- cs.CC
- cs.CE
- cs.CG
- cs.CL
- cs.CR
- cs.CV
- cs.CY
- cs.DB
- cs.DC
- cs.DL
- cs.DM
- cs.DS
- cs.ET
- cs.FL
- cs.GL
- cs.GR
- cs.GT
- cs.HC
- cs.IR
- cs.IT
- cs.LG
- cs.LO
- cs.MA
- cs.MM
- cs.MS
- cs.NA
- cs.NE
- cs.NI
- cs.OH
- cs.OS
- cs.PF
- cs.PL
- cs.RO
- cs.SC
- cs.SD
- cs.SE
- cs.SI
- cs.SY
- cs_l1
- dg-ga
- econ.EM
- econ.GN
- econ.TH
- econ_l1
- eess.AS
- eess.IV
- eess.SP
- eess.SY
- eess_l1
- eval_sets
- funct-an
- gr-qc
- hep-ex
- hep-lat
- hep-ph
- hep-th
- math-ph
- math.AC
- math.AG
- math.AP
- math.AT
- math.CA
- math.CO
- math.CT
- math.CV
- math.DG
- math.DS
- math.FA
- math.GM
- math.GN
- math.GR
- math.GT
- math.HO
- math.IT
- math.KT
- math.LO
- math.MG
- math.MP
- math.NA
- math.NT
- math.OA
- math.OC
- math.PR
- math.QA
- math.RA
- math.RT
- math.SG
- math.SP
- math.ST
- math_l1
- mtrl-th
- nlin.AO
- nlin.CD
- nlin.CG
- nlin.PS
- nlin.SI
- nlin_l1
- nucl-ex
- nucl-th
- patt-sol
- physics.acc-ph
- physics.ao-ph
- physics.app-ph
- physics.atm-clus
- physics.atom-ph
- physics.bio-ph
- physics.chem-ph
- physics.class-ph
- physics.comp-ph
- physics.data-an
- physics.ed-ph
- physics.flu-dyn
- physics.gen-ph
- physics.geo-ph
- physics.hist-ph
- physics.ins-det
- physics.med-ph
- physics.optics
- physics.plasm-ph
- physics.pop-ph
- physics.soc-ph
- physics.space-ph
- physics_l1
- plasm-ph
- q-alg
- q-bio
- q-bio.BM
- q-bio.CB
- q-bio.GN
- q-bio.MN
- q-bio.NC
- q-bio.OT
- q-bio.PE
- q-bio.QM
- q-bio.SC
- q-bio.TO
- q-bio_l1
- q-fin.CP
- q-fin.EC
- q-fin.GN
- q-fin.MF
- q-fin.PM
- q-fin.PR
- q-fin.RM
- q-fin.ST
- q-fin.TR
- q-fin_l1
- quant-ph
- solv-int
- stat.AP
- stat.CO
- stat.ME
- stat.ML
- stat.OT
- stat.TH
- stat_l1
- supr-con
## Citation
Please cite this work if you found this data useful.
```bib
@article{reid2022m2d2,
title = {M2D2: A Massively Multi-domain Language Modeling Dataset},
author = {Machel Reid and Victor Zhong and Suchin Gururangan and Luke Zettlemoyer},
year = {2022},
journal = {arXiv preprint arXiv: Arxiv-2210.07370}
}
``` | machelreid/m2d2 | [
"license:cc-by-nc-4.0",
"arxiv:2210.07370",
"region:us"
] | 2022-10-18T14:14:07+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-25T11:57:24+00:00 | [
"2210.07370"
] | [] | TAGS
#license-cc-by-nc-4.0 #arxiv-2210.07370 #region-us
| # M2D2: A Massively Multi-domain Language Modeling Dataset
*From the paper "M2D2: A Massively Multi-domain Language Modeling Dataset", (Reid et al., EMNLP 2022)*
Load the dataset as follows:
## Domains
- Culture_and_the_arts
- Culture_and_the_arts__Culture_and_Humanities
- Culture_and_the_arts__Games_and_Toys
- Culture_and_the_arts__Mass_media
- Culture_and_the_arts__Performing_arts
- Culture_and_the_arts__Sports_and_Recreation
- Culture_and_the_arts__The_arts_and_Entertainment
- Culture_and_the_arts__Visual_arts
- General_referece
- General_referece__Further_research_tools_and_topics
- General_referece__Reference_works
- Health_and_fitness
- Health_and_fitness__Exercise
- Health_and_fitness__Health_science
- Health_and_fitness__Human_medicine
- Health_and_fitness__Nutrition
- Health_and_fitness__Public_health
- Health_and_fitness__Self_care
- History_and_events
- History_and_events__By_continent
- History_and_events__By_period
- History_and_events__By_region
- Human_activites
- Human_activites__Human_activities
- Human_activites__Impact_of_human_activity
- Mathematics_and_logic
- Mathematics_and_logic__Fields_of_mathematics
- Mathematics_and_logic__Logic
- Mathematics_and_logic__Mathematics
- Natural_and_physical_sciences
- Natural_and_physical_sciences__Biology
- Natural_and_physical_sciences__Earth_sciences
- Natural_and_physical_sciences__Nature
- Natural_and_physical_sciences__Physical_sciences
- Philosophy
- Philosophy_and_thinking
- Philosophy_and_thinking__Philosophy
- Philosophy_and_thinking__Thinking
- Religion_and_belief_systems
- Religion_and_belief_systems__Allah
- Religion_and_belief_systems__Belief_systems
- Religion_and_belief_systems__Major_beliefs_of_the_world
- Society_and_social_sciences
- Society_and_social_sciences__Social_sciences
- Society_and_social_sciences__Society
- Technology_and_applied_sciences
- Technology_and_applied_sciences__Agriculture
- Technology_and_applied_sciences__Computing
- Technology_and_applied_sciences__Engineering
- Technology_and_applied_sciences__Transport
- alg-geom
- ao-sci
- astro-ph
- astro-ph.CO
- astro-ph.EP
- astro-ph.GA
- astro-ph.HE
- astro-ph.IM
- astro-ph.SR
- astro-ph_l1
- atom-ph
- bayes-an
- chao-dyn
- chem-ph
- cmp-lg
- comp-gas
- cond-mat
- URL-nn
- URL-hall
- URL-sci
- URL
- URL-gas
- URL
- URL-mech
- URL-el
- URL-con
- cond-mat_l1
- cs.AI
- cs.AR
- cs.CC
- cs.CE
- cs.CG
- cs.CL
- cs.CR
- cs.CV
- cs.CY
- cs.DB
- cs.DC
- cs.DL
- cs.DM
- cs.DS
- cs.ET
- cs.FL
- cs.GL
- cs.GR
- cs.GT
- cs.HC
- cs.IR
- cs.IT
- cs.LG
- cs.LO
- cs.MA
- cs.MM
- cs.MS
- cs.NA
- cs.NE
- cs.NI
- cs.OH
- cs.OS
- cs.PF
- cs.PL
- cs.RO
- cs.SC
- cs.SD
- cs.SE
- cs.SI
- cs.SY
- cs_l1
- dg-ga
- econ.EM
- econ.GN
- econ.TH
- econ_l1
- eess.AS
- eess.IV
- eess.SP
- eess.SY
- eess_l1
- eval_sets
- funct-an
- gr-qc
- hep-ex
- hep-lat
- hep-ph
- hep-th
- math-ph
- math.AC
- math.AG
- math.AP
- math.AT
- math.CA
- math.CO
- math.CT
- math.CV
- math.DG
- math.DS
- math.FA
- math.GM
- math.GN
- math.GR
- math.GT
- math.HO
- math.IT
- math.KT
- math.LO
- math.MG
- math.MP
- math.NA
- math.NT
- math.OA
- math.OC
- math.PR
- math.QA
- math.RA
- math.RT
- math.SG
- math.SP
- math.ST
- math_l1
- mtrl-th
- nlin.AO
- nlin.CD
- nlin.CG
- nlin.PS
- nlin.SI
- nlin_l1
- nucl-ex
- nucl-th
- patt-sol
- URL-ph
- URL-ph
- URL-ph
- URL-clus
- URL-ph
- URL-ph
- URL-ph
- URL-ph
- URL-ph
- URL-an
- URL-ph
- URL-dyn
- URL-ph
- URL-ph
- URL-ph
- URL-det
- URL-ph
- URL
- URL-ph
- URL-ph
- URL-ph
- URL-ph
- physics_l1
- plasm-ph
- q-alg
- q-bio
- q-bio.BM
- q-bio.CB
- q-bio.GN
- q-bio.MN
- q-bio.NC
- q-bio.OT
- q-bio.PE
- q-bio.QM
- q-bio.SC
- q-bio.TO
- q-bio_l1
- q-fin.CP
- q-fin.EC
- q-fin.GN
- q-fin.MF
- q-fin.PM
- q-fin.PR
- q-fin.RM
- q-fin.ST
- q-fin.TR
- q-fin_l1
- quant-ph
- solv-int
- stat.AP
- stat.CO
- stat.ME
- stat.ML
- stat.OT
- stat.TH
- stat_l1
- supr-con
supr-con
Please cite this work if you found this data useful.
| [
"# M2D2: A Massively Multi-domain Language Modeling Dataset\n\n*From the paper \"M2D2: A Massively Multi-domain Language Modeling Dataset\", (Reid et al., EMNLP 2022)*\n\nLoad the dataset as follows:",
"## Domains\n - Culture_and_the_arts\n - Culture_and_the_arts__Culture_and_Humanities\n - Culture_and_the_arts__Games_and_Toys\n - Culture_and_the_arts__Mass_media\n - Culture_and_the_arts__Performing_arts\n - Culture_and_the_arts__Sports_and_Recreation\n - Culture_and_the_arts__The_arts_and_Entertainment\n - Culture_and_the_arts__Visual_arts\n - General_referece\n - General_referece__Further_research_tools_and_topics\n - General_referece__Reference_works\n - Health_and_fitness\n - Health_and_fitness__Exercise\n - Health_and_fitness__Health_science\n - Health_and_fitness__Human_medicine\n - Health_and_fitness__Nutrition\n - Health_and_fitness__Public_health\n - Health_and_fitness__Self_care\n - History_and_events\n - History_and_events__By_continent\n - History_and_events__By_period\n - History_and_events__By_region\n - Human_activites\n - Human_activites__Human_activities\n - Human_activites__Impact_of_human_activity\n - Mathematics_and_logic\n - Mathematics_and_logic__Fields_of_mathematics\n - Mathematics_and_logic__Logic\n - Mathematics_and_logic__Mathematics\n - Natural_and_physical_sciences\n - Natural_and_physical_sciences__Biology\n - Natural_and_physical_sciences__Earth_sciences\n - Natural_and_physical_sciences__Nature\n - Natural_and_physical_sciences__Physical_sciences\n - Philosophy\n - Philosophy_and_thinking\n - Philosophy_and_thinking__Philosophy\n - Philosophy_and_thinking__Thinking\n - Religion_and_belief_systems\n - Religion_and_belief_systems__Allah\n - Religion_and_belief_systems__Belief_systems\n - Religion_and_belief_systems__Major_beliefs_of_the_world\n - Society_and_social_sciences\n - Society_and_social_sciences__Social_sciences\n - Society_and_social_sciences__Society\n - Technology_and_applied_sciences\n - Technology_and_applied_sciences__Agriculture\n - Technology_and_applied_sciences__Computing\n - Technology_and_applied_sciences__Engineering\n - Technology_and_applied_sciences__Transport\n - alg-geom\n - ao-sci\n - astro-ph\n - astro-ph.CO\n - astro-ph.EP\n - astro-ph.GA\n - astro-ph.HE\n - astro-ph.IM\n - astro-ph.SR\n - astro-ph_l1\n - atom-ph\n - bayes-an\n - chao-dyn\n - chem-ph\n - cmp-lg\n - comp-gas\n - cond-mat\n - URL-nn\n - URL-hall\n - URL-sci\n - URL\n - URL-gas\n - URL\n - URL-mech\n - URL-el\n - URL-con\n - cond-mat_l1\n - cs.AI\n - cs.AR\n - cs.CC\n - cs.CE\n - cs.CG\n - cs.CL\n - cs.CR\n - cs.CV\n - cs.CY\n - cs.DB\n - cs.DC\n - cs.DL\n - cs.DM\n - cs.DS\n - cs.ET\n - cs.FL\n - cs.GL\n - cs.GR\n - cs.GT\n - cs.HC\n - cs.IR\n - cs.IT\n - cs.LG\n - cs.LO\n - cs.MA\n - cs.MM\n - cs.MS\n - cs.NA\n - cs.NE\n - cs.NI\n - cs.OH\n - cs.OS\n - cs.PF\n - cs.PL\n - cs.RO\n - cs.SC\n - cs.SD\n - cs.SE\n - cs.SI\n - cs.SY\n - cs_l1\n - dg-ga\n - econ.EM\n - econ.GN\n - econ.TH\n - econ_l1\n - eess.AS\n - eess.IV\n - eess.SP\n - eess.SY\n - eess_l1\n - eval_sets\n - funct-an\n - gr-qc\n - hep-ex\n - hep-lat\n - hep-ph\n - hep-th\n - math-ph\n - math.AC\n - math.AG\n - math.AP\n - math.AT\n - math.CA\n - math.CO\n - math.CT\n - math.CV\n - math.DG\n - math.DS\n - math.FA\n - math.GM\n - math.GN\n - math.GR\n - math.GT\n - math.HO\n - math.IT\n - math.KT\n - math.LO\n - math.MG\n - math.MP\n - math.NA\n - math.NT\n - math.OA\n - math.OC\n - math.PR\n - math.QA\n - math.RA\n - math.RT\n - math.SG\n - math.SP\n - math.ST\n - math_l1\n - mtrl-th\n - nlin.AO\n - nlin.CD\n - nlin.CG\n - nlin.PS\n - nlin.SI\n - nlin_l1\n - nucl-ex\n - nucl-th\n - patt-sol\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-clus\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - 
URL-an\n - URL-ph\n - URL-dyn\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-det\n - URL-ph\n - URL\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - physics_l1\n - plasm-ph\n - q-alg\n - q-bio\n - q-bio.BM\n - q-bio.CB\n - q-bio.GN\n - q-bio.MN\n - q-bio.NC\n - q-bio.OT\n - q-bio.PE\n - q-bio.QM\n - q-bio.SC\n - q-bio.TO\n - q-bio_l1\n - q-fin.CP\n - q-fin.EC\n - q-fin.GN\n - q-fin.MF\n - q-fin.PM\n - q-fin.PR\n - q-fin.RM\n - q-fin.ST\n - q-fin.TR\n - q-fin_l1\n - quant-ph\n - solv-int\n - stat.AP\n - stat.CO\n - stat.ME\n - stat.ML\n - stat.OT\n - stat.TH\n - stat_l1\n - supr-con\nsupr-con\n\nPlease cite this work if you found this data useful."
] | [
"TAGS\n#license-cc-by-nc-4.0 #arxiv-2210.07370 #region-us \n",
"# M2D2: A Massively Multi-domain Language Modeling Dataset\n\n*From the paper \"M2D2: A Massively Multi-domain Language Modeling Dataset\", (Reid et al., EMNLP 2022)*\n\nLoad the dataset as follows:",
"## Domains\n - Culture_and_the_arts\n - Culture_and_the_arts__Culture_and_Humanities\n - Culture_and_the_arts__Games_and_Toys\n - Culture_and_the_arts__Mass_media\n - Culture_and_the_arts__Performing_arts\n - Culture_and_the_arts__Sports_and_Recreation\n - Culture_and_the_arts__The_arts_and_Entertainment\n - Culture_and_the_arts__Visual_arts\n - General_referece\n - General_referece__Further_research_tools_and_topics\n - General_referece__Reference_works\n - Health_and_fitness\n - Health_and_fitness__Exercise\n - Health_and_fitness__Health_science\n - Health_and_fitness__Human_medicine\n - Health_and_fitness__Nutrition\n - Health_and_fitness__Public_health\n - Health_and_fitness__Self_care\n - History_and_events\n - History_and_events__By_continent\n - History_and_events__By_period\n - History_and_events__By_region\n - Human_activites\n - Human_activites__Human_activities\n - Human_activites__Impact_of_human_activity\n - Mathematics_and_logic\n - Mathematics_and_logic__Fields_of_mathematics\n - Mathematics_and_logic__Logic\n - Mathematics_and_logic__Mathematics\n - Natural_and_physical_sciences\n - Natural_and_physical_sciences__Biology\n - Natural_and_physical_sciences__Earth_sciences\n - Natural_and_physical_sciences__Nature\n - Natural_and_physical_sciences__Physical_sciences\n - Philosophy\n - Philosophy_and_thinking\n - Philosophy_and_thinking__Philosophy\n - Philosophy_and_thinking__Thinking\n - Religion_and_belief_systems\n - Religion_and_belief_systems__Allah\n - Religion_and_belief_systems__Belief_systems\n - Religion_and_belief_systems__Major_beliefs_of_the_world\n - Society_and_social_sciences\n - Society_and_social_sciences__Social_sciences\n - Society_and_social_sciences__Society\n - Technology_and_applied_sciences\n - Technology_and_applied_sciences__Agriculture\n - Technology_and_applied_sciences__Computing\n - Technology_and_applied_sciences__Engineering\n - Technology_and_applied_sciences__Transport\n - alg-geom\n - ao-sci\n - astro-ph\n - astro-ph.CO\n - astro-ph.EP\n - astro-ph.GA\n - astro-ph.HE\n - astro-ph.IM\n - astro-ph.SR\n - astro-ph_l1\n - atom-ph\n - bayes-an\n - chao-dyn\n - chem-ph\n - cmp-lg\n - comp-gas\n - cond-mat\n - URL-nn\n - URL-hall\n - URL-sci\n - URL\n - URL-gas\n - URL\n - URL-mech\n - URL-el\n - URL-con\n - cond-mat_l1\n - cs.AI\n - cs.AR\n - cs.CC\n - cs.CE\n - cs.CG\n - cs.CL\n - cs.CR\n - cs.CV\n - cs.CY\n - cs.DB\n - cs.DC\n - cs.DL\n - cs.DM\n - cs.DS\n - cs.ET\n - cs.FL\n - cs.GL\n - cs.GR\n - cs.GT\n - cs.HC\n - cs.IR\n - cs.IT\n - cs.LG\n - cs.LO\n - cs.MA\n - cs.MM\n - cs.MS\n - cs.NA\n - cs.NE\n - cs.NI\n - cs.OH\n - cs.OS\n - cs.PF\n - cs.PL\n - cs.RO\n - cs.SC\n - cs.SD\n - cs.SE\n - cs.SI\n - cs.SY\n - cs_l1\n - dg-ga\n - econ.EM\n - econ.GN\n - econ.TH\n - econ_l1\n - eess.AS\n - eess.IV\n - eess.SP\n - eess.SY\n - eess_l1\n - eval_sets\n - funct-an\n - gr-qc\n - hep-ex\n - hep-lat\n - hep-ph\n - hep-th\n - math-ph\n - math.AC\n - math.AG\n - math.AP\n - math.AT\n - math.CA\n - math.CO\n - math.CT\n - math.CV\n - math.DG\n - math.DS\n - math.FA\n - math.GM\n - math.GN\n - math.GR\n - math.GT\n - math.HO\n - math.IT\n - math.KT\n - math.LO\n - math.MG\n - math.MP\n - math.NA\n - math.NT\n - math.OA\n - math.OC\n - math.PR\n - math.QA\n - math.RA\n - math.RT\n - math.SG\n - math.SP\n - math.ST\n - math_l1\n - mtrl-th\n - nlin.AO\n - nlin.CD\n - nlin.CG\n - nlin.PS\n - nlin.SI\n - nlin_l1\n - nucl-ex\n - nucl-th\n - patt-sol\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-clus\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - 
URL-an\n - URL-ph\n - URL-dyn\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-det\n - URL-ph\n - URL\n - URL-ph\n - URL-ph\n - URL-ph\n - URL-ph\n - physics_l1\n - plasm-ph\n - q-alg\n - q-bio\n - q-bio.BM\n - q-bio.CB\n - q-bio.GN\n - q-bio.MN\n - q-bio.NC\n - q-bio.OT\n - q-bio.PE\n - q-bio.QM\n - q-bio.SC\n - q-bio.TO\n - q-bio_l1\n - q-fin.CP\n - q-fin.EC\n - q-fin.GN\n - q-fin.MF\n - q-fin.PM\n - q-fin.PR\n - q-fin.RM\n - q-fin.ST\n - q-fin.TR\n - q-fin_l1\n - quant-ph\n - solv-int\n - stat.AP\n - stat.CO\n - stat.ME\n - stat.ML\n - stat.OT\n - stat.TH\n - stat_l1\n - supr-con\nsupr-con\n\nPlease cite this work if you found this data useful."
] |
5bc518f5c3350f2c92f405e8223c982c8b9dc9f0 | This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset.
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across text entries.
The `repeat` ones repeat the same 10 unique records; these are useful for memory-leak debugging, as the records are always the same and thus remove record variation from the equation.
The default split is `100.unique`.
The full process of this dataset creation, including which records were used to build it, is documented inside [cm4-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/blob/main/cm4-synthetic-testing.py)
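A minimal usage sketch (the split names come from the list above; everything else is plain `datasets` usage):
```python
# Load one synthetic split and inspect it; replace "100.unique" with any
# of the split names listed above (e.g. "1k.repeat" for leak debugging).
from datasets import load_dataset

ds = load_dataset("stas/cm4-synthetic-testing", split="100.unique")
print(ds)              # features and number of records
print(ds[0].keys())    # fields available in each synthetic record
```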
| stas/cm4-synthetic-testing | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-10-18T15:08:16+00:00 | {"license": "bigscience-openrail-m"} | 2022-10-18T15:20:31+00:00 | [] | [] | TAGS
#license-bigscience-openrail-m #region-us
| This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset.
The current splits are: '['URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL']'.
The 'unique' ones ensure uniqueness across text entries.
The 'repeat' ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation.
The default split is 'URL'
The full process of this dataset creation, including which records were used to build it, is documented inside URL
| [] | [
"TAGS\n#license-bigscience-openrail-m #region-us \n"
] |
3b1395c1f2fa1e3432227828e5e917aefe3bade8 | This dataset is designed to be used in testing. It's derived from general-pmd/localized_narratives__ADE20k dataset
The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.
The `unique` ones ensure uniqueness across `text` entries.
The `repeat` ones repeat the same 10 unique records; these are useful for memory-leak debugging, as the records are always the same and thus remove record variation from the equation.
The default split is `100.unique`.
The full process of this dataset creation, including which records were used to build it, is documented inside [general-pmd-synthetic-testing.py](https://huggingface.co/datasets/HuggingFaceM4/general-pmd-synthetic-testing/blob/main/general-pmd-synthetic-testing.py)
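A short sketch of how the `repeat` splits can be used for memory-leak debugging (the split name is taken from the list above; the processing step is a placeholder):
```python
# Iterate the same 10 identical records over and over: any memory growth
# observed here cannot be attributed to record-to-record variation.
from datasets import load_dataset

ds = load_dataset("stas/general-pmd-synthetic-testing", split="100.repeat")
for epoch in range(3):
    for record in ds:
        _ = record     # replace with your preprocessing / model forward pass
```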
| stas/general-pmd-synthetic-testing | [
"license:bigscience-openrail-m",
"region:us"
] | 2022-10-18T15:08:31+00:00 | {"license": "bigscience-openrail-m"} | 2022-10-18T15:21:21+00:00 | [] | [] | TAGS
#license-bigscience-openrail-m #region-us
| This dataset is designed to be used in testing. It's derived from general-pmd/localized_narratives__ADE20k dataset
The current splits are: '['URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL']'.
The 'unique' ones ensure uniqueness across 'text' entries.
The 'repeat' ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation.
The default split is 'URL'
The full process of this dataset creation, including which records were used to build it, is documented inside URL
| [] | [
"TAGS\n#license-bigscience-openrail-m #region-us \n"
] |
44bae37e90157953b355c34f478fcb528b436617 | # Dataset Card for "huggingface_hub-dependents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | open-source-metrics/huggingface_hub-dependents | [
"region:us"
] | 2022-10-18T16:42:53+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}} | 2024-02-16T18:19:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "huggingface_hub-dependents"
More Information needed | [
"# Dataset Card for \"huggingface_hub-dependents\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"huggingface_hub-dependents\"\n\nMore Information needed"
] |
98c965d524897e68b0601d52308e8be096832be1 | # Dataset Card for "hub-docs-dependents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | open-source-metrics/hub-docs-dependents | [
"region:us"
] | 2022-10-18T16:43:02+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}} | 2024-02-16T18:08:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "hub-docs-dependents"
More Information needed | [
"# Dataset Card for \"hub-docs-dependents\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"hub-docs-dependents\"\n\nMore Information needed"
] |
7e1f136ac970901e3c0f3e7d3c1a767c15f20a31 | SNOMED-CT-Code-Value-Semantic-Set.csv | awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv | [
"license:mit",
"region:us"
] | 2022-10-18T17:19:41+00:00 | {"license": "mit"} | 2022-10-29T11:42:02+00:00 | [] | [] | TAGS
#license-mit #region-us
| URL | [] | [
"TAGS\n#license-mit #region-us \n"
] |
64caf7a2bb26ccdd78697ba041707e394f07ff1b | eCQM-Code-Value-Semantic-Set.csv | awacke1/eCQM-Code-Value-Semantic-Set.csv | [
"license:mit",
"region:us"
] | 2022-10-18T17:48:30+00:00 | {"license": "mit"} | 2022-10-29T11:40:54+00:00 | [] | [] | TAGS
#license-mit #region-us
| URL | [] | [
"TAGS\n#license-mit #region-us \n"
] |
2e48dde411fd4172fa5196bfe0f6aa1ad75f20d5 | # Dataset Card for BBBicycles
## Dataset Summary
The Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of **damaged object re-identification**, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview [here](https://huggingface.co/spaces/GrainsPolito/BBBicyclesPreview).
## Dataset Structure
The final dataset contains:
- Total of 39,200 images
- 2,800 unique IDs
- 20 models
- 140 IDs for each model
<table border-collapse="collapse">
<tr>
<td><b style="font-size:25px">Information for each ID:</b></td>
<td><b style="font-size:25px">Information for each render:</b></td>
</tr>
<tr>
<td>
<ul>
<li>Model</li>
<li>Type</li>
<li>Texture type</li>
<li>Stickers</li>
</ul>
</td>
<td>
<ul>
<li>Background</li>
<li>Viewing Side</li>
<li>Focal Length</li>
<li>Presence of dirt</li>
</ul>
</td>
</tr>
</table>
### Citation Information
```
@inproceedings{bbb_2022,
title={Bent & Broken Bicycles: Leveraging synthetic data for damaged object re-identification},
author={Luca Piano, Filippo Gabriele Pratticò, Alessandro Sebastian Russo, Lorenzo Lanari, Lia Morra, Fabrizio Lamberti},
booktitle={2022 IEEE Winter Conference on Applications of Computer Vision (WACV)},
year={2022},
organization={IEEE}
}
```
### Credits
The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni. | GrainsPolito/BBBicycles | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-18T18:05:32+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-20T10:14:59+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
| # Dataset Card for BBBicycles
## Dataset Summary
Bent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of damaged object re-identification, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview here.
## Dataset Structure
The final dataset contains:
- Total of 39,200 image
- 2,800 unique IDs
- 20 models
- 140 IDs for each model
<table border-collapse="collapse">
<tr>
<td><b style="font-size:25px">Information for each ID:</b></td>
<td><b style="font-size:25px">Information for each render:</b></td>
</tr>
<tr>
<td>
<ul>
<li>Model</li>
<li>Type</li>
<li>Texture type</li>
<li>Stickers</li>
</ul>
</td>
<td>
<ul>
<li>Background</li>
<li>Viewing Side</li>
<li>Focal Length</li>
<li>Presence of dirt</li>
</ul>
</td>
</tr>
</table>
### Credits
The authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni. | [
"# Dataset Card for BBBicycles",
"## Dataset Summary\nBent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of damaged object re-identification, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview here.",
"## Dataset Structure\nThe final dataset contains:\n\n - Total of 39,200 image\n - 2,800 unique IDs\n - 20 models\n - 140 IDs for each model\n\n<table border-collapse=\"collapse\">\n <tr>\n <td><b style=\"font-size:25px\">Information for each ID:</b></td>\n <td><b style=\"font-size:25px\">Information for each render:</b></td>\n </tr>\n <tr>\n <td>\n <ul>\n <li>Model</li>\n <li>Type</li>\n <li>Texture type</li>\n <li>Stickers</li>\n </ul>\n </td>\n <td>\n <ul>\n <li>Background</li>\n <li>Viewing Side</li>\n <li>Focal Length</li>\n <li>Presence of dirt</li>\n </ul>\n </td>\n </tr>\n</table>",
"### Credits\nThe authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni."
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for BBBicycles",
"## Dataset Summary\nBent & Broken Bicycles (BBBicycles) dataset is a benchmark set for the novel task of damaged object re-identification, which aims to identify the same object in multiple images even in the presence of breaks, deformations, and missing parts. You can find an interactive preview here.",
"## Dataset Structure\nThe final dataset contains:\n\n - Total of 39,200 image\n - 2,800 unique IDs\n - 20 models\n - 140 IDs for each model\n\n<table border-collapse=\"collapse\">\n <tr>\n <td><b style=\"font-size:25px\">Information for each ID:</b></td>\n <td><b style=\"font-size:25px\">Information for each render:</b></td>\n </tr>\n <tr>\n <td>\n <ul>\n <li>Model</li>\n <li>Type</li>\n <li>Texture type</li>\n <li>Stickers</li>\n </ul>\n </td>\n <td>\n <ul>\n <li>Background</li>\n <li>Viewing Side</li>\n <li>Focal Length</li>\n <li>Presence of dirt</li>\n </ul>\n </td>\n </tr>\n</table>",
"### Credits\nThe authors gratefully acknowledge the financial support of Reale Mutua Assicurazioni."
] |
f53f236e7cef0060169e534d6125f3c7d949a0f2 | LOINC-CodeSet-Value-Description.csv | awacke1/LOINC-CodeSet-Value-Description.csv | [
"license:mit",
"region:us"
] | 2022-10-18T18:08:21+00:00 | {"license": "mit"} | 2022-10-29T11:43:25+00:00 | [] | [] | TAGS
#license-mit #region-us
| URL | [] | [
"TAGS\n#license-mit #region-us \n"
] |
b3c1ead7e05c84f8605ed4cae91199639940046a | # Dataset Card for "ott-qa-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://github.com/wenhuchen/OTT-QA) | ashraq/ott-qa-20k | [
"region:us"
] | 2022-10-18T18:30:29+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "header", "sequence": "string"}, {"name": "data", "sequence": {"sequence": "string"}}, {"name": "section_title", "dtype": "string"}, {"name": "section_text", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "intro", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41038376, "num_examples": 20000}], "download_size": 23329221, "dataset_size": 41038376}} | 2022-10-21T08:06:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ott-qa-20k"
More Information needed
The data was obtained from here | [
"# Dataset Card for \"ott-qa-20k\"\n\nMore Information needed\n\nThe data was obtained from here"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ott-qa-20k\"\n\nMore Information needed\n\nThe data was obtained from here"
] |
4027643008b46f5ef6e4ed9529a236d9d17a9777 | # Arcane Diffusion Dataset
Dataset containing the 75 images used to train the [Arcane Diffusion](https://huggingface.co/nitrosocke/Arcane-Diffusion) model.
Settings for training:
```
class prompt: illustration style
instance prompt: illustration arcane style
learning rate: 5e-6
lr scheduler: constant
num class images: 1000
max train steps: 5000
``` | nitrosocke/arcane-diffusion-dataset | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-18T19:47:20+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-18T19:58:23+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| # Arcane Diffusion Dataset
Dataset containing the 75 images used to train the Arcane Diffusion model.
Settings for training:
| [
"# Arcane Diffusion Dataset\nDataset containing the 75 images used to train the Arcane Diffusion model.\n\nSettings for training:"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"# Arcane Diffusion Dataset\nDataset containing the 75 images used to train the Arcane Diffusion model.\n\nSettings for training:"
] |
65312b1ae759cb963e15e67d641b76c975f2da5b | This dataset contains two subsets: NEG-136-SIMP and NEG-136-NAT. NEG-136-SIMP items come from Fischler et al. (1983). NEG-136-NAT items come from Nieuwland & Kuperberg (2008).
The `NEG-136-SIMP.tsv` and `NEG-136-NAT.tsv` files contain for each item the affirmative and negative version of the context (context_aff, context_neg), and completions that are true with the affirmative context (target_aff) and with the negative context (target_neg).
* For NEG-136-SIMP, determiners (*a*/*an*) are left ambiguous, and need to be selected based on the completion noun (this is done already in `proc_datasets.py`).
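The sketch below (not the authors' `proc_datasets.py`) shows one way to read a subset and fill in the ambiguous determiner; the column names follow the description above, while the vowel-based *a*/*an* rule and the presence of a header row are simplifying assumptions.
```python
# Read NEG-136-SIMP and prepend a naive a/an determiner to each completion.
import pandas as pd

df = pd.read_csv("NEG-136-SIMP.tsv", sep="\t")  # context_aff, context_neg, target_aff, target_neg

def with_determiner(noun: str) -> str:
    article = "an" if noun[0].lower() in "aeiou" else "a"
    return f"{article} {noun}"

row = df.iloc[0]
print(row["context_aff"], with_determiner(row["target_aff"]))
print(row["context_neg"], with_determiner(row["target_neg"]))
```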
**References**:
* Ira Fischler, Paul A Bloom, Donald G Childers, Salim E Roucos, and Nathan W Perry Jr. 1983. *Brain potentials related to stages of sentence verification.*
* Mante S Nieuwland and Gina R Kuperberg. 2008. *When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation.*
| joey234/neg-136 | [
"region:us"
] | 2022-10-18T22:29:36+00:00 | {} | 2022-10-18T22:30:32+00:00 | [] | [] | TAGS
#region-us
| This dataset contains two subsets: NEG-136-SIMP and NEG-136-NAT. NEG-136-SIMP items come from Fischler et al. (1983). NEG-136-NAT items come from Nieuwland & Kuperberg (2008).
The 'URL' and 'URL' files contain for each item the affirmative and negative version of the context (context_aff, context_neg), and completions that are true with the affirmative context (target_aff) and with the negative context (target_neg).
* For NEG-136-SIMP, determiners (*a*/*an*) are left ambiguous, and need to be selected based on the completion noun (this is done already in 'proc_datasets.py').
References:
* Ira Fischler, Paul A Bloom, Donald G Childers, Salim E Roucos, and Nathan W Perry Jr. 1983. *Brain potentials related to stages of sentence verification.*
* Mante S Nieuwland and Gina R Kuperberg. 2008. *When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation.*
| [] | [
"TAGS\n#region-us \n"
] |
0a578cf440b735e43aae48aee71e470b7274f095 | # Dataset Card for "laion-2b-vietnamese-subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | truongpdd/laion-2b-vietnamese-subset | [
"region:us"
] | 2022-10-19T01:36:05+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "URL", "dtype": "string"}, {"name": "TEXT", "dtype": "string"}, {"name": "HEIGHT", "dtype": "int32"}, {"name": "WIDTH", "dtype": "int32"}, {"name": "LICENSE", "dtype": "string"}, {"name": "LANGUAGE", "dtype": "string"}, {"name": "NSFW", "dtype": "string"}, {"name": "similarity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 10669843542.009588, "num_examples": 48169285}], "download_size": 7285732213, "dataset_size": 10669843542.009588}} | 2022-10-19T05:09:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "laion-2b-vietnamese-subset"
More Information needed | [
"# Dataset Card for \"laion-2b-vietnamese-subset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"laion-2b-vietnamese-subset\"\n\nMore Information needed"
] |
d1d88d27e2e912c28703113bc3ddcdc86211e8bc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-cnn_dailymail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@DongfuTingle](https://huggingface.co/DongfuTingle) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-2bc9e0-1812262541 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-19T04:49:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-10-19T06:36:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-cnn_dailymail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @DongfuTingle for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @DongfuTingle for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-cnn_dailymail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @DongfuTingle for evaluating this model."
] |
70eadbf169fd5ab7249fc8fcebced984b1b0f1de | # Dataset Card for "CelebA-faces-cropped-128"
Just a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: https://colab.research.google.com/drive/1-P5mKb5VEQrzCmpx5QWomlq0-WNXaSxn?usp=sharing
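For reference, a rough sketch of how such crops could be produced (this is not the linked notebook; the detector choice, margin-free crop, and output size are assumptions):
```python
# Detect the first face with dlib and crop/resize it to 128x128 with Pillow.
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()

def crop_face(path, size=128):
    img = Image.open(path).convert("RGB")
    faces = detector(np.array(img), 1)   # upsample once to catch small faces
    if not faces:
        return None                      # no face found
    f = faces[0]
    return img.crop((f.left(), f.top(), f.right(), f.bottom())).resize((size, size))
```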
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tglcourse/CelebA-faces-cropped-128 | [
"region:us"
] | 2022-10-19T05:00:14+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 274664364.23, "num_examples": 10130}, {"name": "train", "num_bytes": 5216078696.499, "num_examples": 192469}], "download_size": 0, "dataset_size": 5490743060.729}} | 2022-10-19T09:36:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "CelebA-faces-cropped-128"
Just a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: URL
More Information needed | [
"# Dataset Card for \"CelebA-faces-cropped-128\"\n\nJust a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: URL\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"CelebA-faces-cropped-128\"\n\nJust a 128px version of the CelebA-faces dataset, which I've cropped to the face regions using dlib. Processing notebook: URL\n\nMore Information needed"
] |