sha (stringlengths 40–40) | text (stringlengths 1–13.4M) | id (stringlengths 2–117) | tags (sequencelengths 1–7.91k) | created_at (stringlengths 25–25) | metadata (stringlengths 2–875k) | last_modified (stringlengths 25–25) | arxiv (sequencelengths 0–25) | languages (sequencelengths 0–7.91k) | tags_str (stringlengths 17–159k) | text_str (stringlengths 1–447k) | text_lists (sequencelengths 0–352) | processed_texts (sequencelengths 1–353)
---|---|---|---|---|---|---|---|---|---|---|---|---
01b5203a600c3bde5dbf229adee63962608e0714 | # Dataset Card for "text_summarization_dataset3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset3 | [
"region:us"
] | 2022-11-01T02:15:46+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123296943, "num_examples": 103365}], "download_size": 41220771, "dataset_size": 123296943}} | 2022-11-01T02:15:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "text_summarization_dataset3"
More Information needed | [
"# Dataset Card for \"text_summarization_dataset3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_summarization_dataset3\"\n\nMore Information needed"
] |
a4910c6c1646eacfcb88f7703e2e0bd7fdee559c | # Dataset Card for "text_summarization_dataset4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset4 | [
"region:us"
] | 2022-11-01T02:16:12+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111909333, "num_examples": 87633}], "download_size": 38273895, "dataset_size": 111909333}} | 2022-11-01T02:16:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "text_summarization_dataset4"
More Information needed | [
"# Dataset Card for \"text_summarization_dataset4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_summarization_dataset4\"\n\nMore Information needed"
] |
945ac8484e1efc07ad26996071343822dad8dc3b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MHassanSaleem](https://huggingface.co/MHassanSaleem) for evaluating this model. | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cadd10-1947965536 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-01T02:40:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "123tarunanand/roberta-base-finetuned", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-01T02:41:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @MHassanSaleem for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MHassanSaleem for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MHassanSaleem for evaluating this model."
] |
16a66c3fda4c2dbb68195d70bf51148d3edb86cf |
# CTKFacts dataset for Document retrieval
Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the [CsFEVER and CTKFacts: Acquiring Czech data for Fact Verification](https://arxiv.org/abs/2201.11115) paper, currently being revised for publication in the LREV journal.
## NLI version
Can be found at https://huggingface.co/datasets/ctu-aic/ctkfacts_nli | ctu-aic/ctkfacts | [
"license:cc-by-sa-3.0",
"arxiv:2201.11115",
"region:us"
] | 2022-11-01T06:36:40+00:00 | {"license": "cc-by-sa-3.0"} | 2022-11-01T06:47:03+00:00 | [
"2201.11115"
] | [] | TAGS
#license-cc-by-sa-3.0 #arxiv-2201.11115 #region-us
|
# CTKFacts dataset for Document retrieval
Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the CsFEVER and CTKFacts: Acquiring Czech data for Fact Verification paper, currently being revised for publication in the LREV journal.
## NLI version
Can be found at URL | [
"# CTKFacts dataset for Document retrieval\n\nCzech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the CsFEVER andCTKFacts: Acquiring Czech data for Fact Verification paper currently being revised for publication in LREV journal.",
"## NLI version\nCan be found at URL"
] | [
"TAGS\n#license-cc-by-sa-3.0 #arxiv-2201.11115 #region-us \n",
"# CTKFacts dataset for Document retrieval\n\nCzech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the CsFEVER andCTKFacts: Acquiring Czech data for Fact Verification paper currently being revised for publication in LREV journal.",
"## NLI version\nCan be found at URL"
] |
31504c14df60081992b939f8acab2762d4fb0ad8 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| Hallalay/TAiPET | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:other-my-multilinguality",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:unknown",
"Wallpaper",
"StableDiffusion",
"img2img",
"region:us"
] | 2022-11-01T08:41:06+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": [], "license": ["unknown"], "multilinguality": ["other-my-multilinguality"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "TAiPET", "tags": ["Wallpaper", "StableDiffusion", "img2img"]} | 2022-11-09T19:59:17+00:00 | [] | [] | TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-other-my-multilinguality #size_categories-1K<n<10K #source_datasets-original #license-unknown #Wallpaper #StableDiffusion #img2img #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-other-my-multilinguality #size_categories-1K<n<10K #source_datasets-original #license-unknown #Wallpaper #StableDiffusion #img2img #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
6f8ce801f8cf4cc9d58c08f61f3424ad612f2f67 |
# HoC: Hallmarks of Cancer Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:[email protected])
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/) which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `multi-class-classification`.
### Languages
The corpus consists of PubMed articles in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
### Data Fields
`document_id`: Unique identifier of the document.
`text`: Raw text of the PubMed abstracts.
`label`: The list of hallmark labels assigned to the sentence, each drawn from the 10 currently known hallmarks of cancer (see the decoding sketch after the table below).
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
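Since each sentence can carry several labels, it is often useful to map the numeric indices back to hallmark abbreviations. A minimal sketch (the index order below follows the table above and is an assumption that should be verified against the dataset's features):

```python
from datasets import load_dataset

dataset = load_dataset("qanastek/HoC")

# Hypothetical index-to-hallmark mapping, in the order of the table above.
HALLMARKS = ["PS", "GS", "CD", "RI", "A", "IM", "GI", "TPI", "CE", "ID"]

sample = dataset["validation"][0]
print(sample["text"])
# e.g. [9, 5, 0, 6] -> ['ID', 'IM', 'PS', 'GI']
print([HALLMARKS[i] for i in sample["label"]])
```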
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2016cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```
| qanastek/HoC | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-11-01T10:49:52+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["en"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "HoC", "language_bcp47": ["en-US"]} | 2022-11-01T15:03:11+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-found #size_categories-1K<n<10K #source_datasets-original #language-English #region-us
| HoC : Hallmarks of Cancer Corpus
================================
Table of Contents
-----------------
* [Dataset Card for](#dataset-card-for-needs-more-information)
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- No Warranty
- Citation Information
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard: URL
* Point of Contact: Yanis Labrak
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the Cancer Hallmarks Analytics Tool which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for 'multi-class-classification'.
### Languages
The corpus consists of PubMed articles in English only:
* 'English - United States (en-US)'
Load the dataset with HuggingFace
---------------------------------
Dataset Structure
-----------------
### Data Instances
### Data Fields
'document\_id': Unique identifier of the document.
'text': Raw text of the PubMed abstracts.
'label': The list of hallmark labels assigned to the sentence, each drawn from the 10 currently known hallmarks of cancer.
### Data Splits
Distribution of data for the 10 hallmarks:
Dataset Creation
----------------
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
Additional Information
----------------------
### Dataset Curators
**HoC**: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna
**Hugging Face**: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
We would very much appreciate it if you cite our publications:
Automatic semantic classification of scientific literature according to the hallmarks of cancer
Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer
Cancer hallmark text classification using convolutional neural networks
Initializing neural networks for hierarchical multi-label text classification
| [
"### Dataset Summary\n\n\nThe Hallmarks of Cancer Corpus for text classification\n\n\nThe Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the \"labels\" directory, while the tokenized text can be found under \"text\" directory. The filenames are the corresponding PubMed IDs (PMID).\n\n\nIn addition to the HOC corpus, we also have the Cancer Hallmarks Analytics Tool which classifes all of PubMed according to the HoC taxonomy.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for 'multi-class-classification'.",
"### Languages\n\n\nThe corpora consists of PubMed article only in english:\n\n\n* 'English - United States (en-US)'\n\n\nLoad the dataset with HuggingFace\n---------------------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n'document\\_id': Unique identifier of the document.\n\n\n'text': Raw text of the PubMed abstracts.\n\n\n'label': One of the 10 currently known hallmarks of cancer.",
"### Data Splits\n\n\nDistribution of data for the 10 hallmarks:\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Who are the source language producers?\n\n\nThe corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.",
"### Personal and Sensitive Information\n\n\nThe corpora is free of personal or sensitive information.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n**HoC**: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna\n\n\n**Hugging Face**: Labrak Yanis (Not affiliated with the original corpus)",
"### Licensing Information\n\n\nWe would very much appreciate it if you cite our publications:\n\n\nAutomatic semantic classification of scientific literature according to the hallmarks of cancer\n\n\nCancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer\n\n\nCancer hallmark text classification using convolutional neural networks\n\n\nInitializing neural networks for hierarchical multi-label text classification"
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-found #size_categories-1K<n<10K #source_datasets-original #language-English #region-us \n",
"### Dataset Summary\n\n\nThe Hallmarks of Cancer Corpus for text classification\n\n\nThe Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the \"labels\" directory, while the tokenized text can be found under \"text\" directory. The filenames are the corresponding PubMed IDs (PMID).\n\n\nIn addition to the HOC corpus, we also have the Cancer Hallmarks Analytics Tool which classifes all of PubMed according to the HoC taxonomy.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for 'multi-class-classification'.",
"### Languages\n\n\nThe corpora consists of PubMed article only in english:\n\n\n* 'English - United States (en-US)'\n\n\nLoad the dataset with HuggingFace\n---------------------------------\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n'document\\_id': Unique identifier of the document.\n\n\n'text': Raw text of the PubMed abstracts.\n\n\n'label': One of the 10 currently known hallmarks of cancer.",
"### Data Splits\n\n\nDistribution of data for the 10 hallmarks:\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Who are the source language producers?\n\n\nThe corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.",
"### Personal and Sensitive Information\n\n\nThe corpora is free of personal or sensitive information.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n**HoC**: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna\n\n\n**Hugging Face**: Labrak Yanis (Not affiliated with the original corpus)",
"### Licensing Information\n\n\nWe would very much appreciate it if you cite our publications:\n\n\nAutomatic semantic classification of scientific literature according to the hallmarks of cancer\n\n\nCancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer\n\n\nCancer hallmark text classification using convolutional neural networks\n\n\nInitializing neural networks for hierarchical multi-label text classification"
] |
d9197eacfb0afff29d90a2d4e7d0d98a5dfb54bc | # Dataset Card for sova_rudevices
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SOVA RuDevices](https://github.com/sovaai/sova-dataset)
- **Repository:** [SOVA Dataset](https://github.com/sovaai/sova-dataset)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [SOVA.ai](mailto:[email protected])
### Dataset Summary
SOVA Dataset is a free public STT/ASR dataset. It consists of several parts; one of them is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16 kHz Russian live speech with manual annotation, prepared by the [SOVA.ai team](https://github.com/sovaai).
The authors do not divide the dataset into train, validation, and test subsets, so I prepared this split myself. The training subset includes more than 82 hours, the validation subset approximately 6 hours, and the test subset approximately 6 hours.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard, which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
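As an illustration of the metric (a sketch using the `evaluate` library; any WER implementation would work the same way):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy reference/prediction pair, made up for illustration.
references = ["мне получше стало"]
predictions = ["мне стало получше"]

# WER = (substitutions + insertions + deletions) / reference words
print(wer_metric.compute(predictions=predictions, references=references))
```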
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': '/home/bond005/datasets/sova_rudevices/data/train/00003ec0-1257-42d1-b475-db1cd548092e.wav',
'array': array([ 0.00787354, 0.00735474, 0.00714111, ...,
 -0.00018311, -0.00015259, -0.00018311], dtype=float32),
'sampling_rate': 16000},
'transcription': 'мне получше стало'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
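A minimal access sketch following the note above (assuming the standard `datasets` loading path for this repository):

```python
from datasets import load_dataset

dataset = load_dataset("bond005/sova_rudevices", split="test")

# Index the row first, then read "audio", so only this one file is decoded.
sample = dataset[0]
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"][:5])      # start of the decoded waveform
print(sample["transcription"])
```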
### Data Splits
This dataset consists of three splits: training, validation, and test. The split was created taking the internal structure of SOVA RuDevices into account (the validation split is based on the subdirectory `0`, and the test split on the subdirectory `1` of the original dataset), but audio recordings of the same speakers can appear in different splits at the same time (the opposite is not guaranteed).
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 81607 | 5835 | 5799 |
| hours | 82.4h | 5.9h | 5.8h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Egor Zubarev, Timofey Moskalets, and the SOVA.ai team.
### Licensing Information
[Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{sova2021rudevices,
author = {Zubarev, Egor and Moskalets, Timofey and SOVA.ai},
title = {SOVA RuDevices Dataset: free public STT/ASR dataset with manually annotated live speech},
publisher = {GitHub},
journal = {GitHub repository},
year = {2021},
howpublished = {\url{https://github.com/sovaai/sova-dataset}},
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset. | bond005/sova_rudevices | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:cc-by-4.0",
"region:us"
] | 2022-11-01T13:03:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "pretty_name": "RuDevices"} | 2022-11-01T15:59:30+00:00 | [] | [
"ru"
] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-cc-by-4.0 #region-us
| Dataset Card for sova\_rudevices
================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: SOVA RuDevices
* Repository: SOVA Dataset
* Leaderboard: The Speech Bench
* Point of Contact: URL
### Dataset Summary
SOVA Dataset is a free public STT/ASR dataset. It consists of several parts; one of them is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16 kHz Russian live speech with manual annotation, prepared by the URL team.
The authors do not divide the dataset into train, validation, and test subsets, so I prepared this split myself. The training subset includes more than 82 hours, the validation subset approximately 6 hours, and the test subset approximately 6 hours.
### Supported Tasks and Leaderboards
* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard, which can be found at URL. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.
### Data Fields
* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
* transcription: the transcription of the audio file.
### Data Splits
This dataset consists of three splits: training, validation, and test. The split was created taking the internal structure of SOVA RuDevices into account (the validation split is based on the subdirectory '0', and the test split on the subdirectory '1' of the original dataset), but audio recordings of the same speakers can appear in different splits at the same time (the opposite is not guaranteed).
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
All recorded audio files were manually annotated.
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
The dataset was initially created by Egor Zubarev, Timofey Moskalets, and URL team.
### Licensing Information
Creative Commons BY 4.0
### Contributions
Thanks to @bond005 for adding this dataset.
| [
"### Dataset Summary\n\n\nSOVA Dataset is free public STT/ASR dataset. It consists of several parts, one of them is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16kHz Russian live speech with manual annotating, prepared by URL team.\n\n\nAuthors do not divide the dataset into train, validation and test subsets. Therefore, I was compelled to prepare this splitting. The training subset includes more than 82 hours, the validation subset includes approximately 6 hours, and the test subset includes approximately 6 hours too.",
"### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\n\n\nThe audio is in Russian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* transcription: the transcription of the audio file.",
"### Data Splits\n\n\nThis dataset consists of three splits: training, validation, and test. This splitting was realized with accounting of internal structure of SOVA RuDevices (the validation split is based on the subdirectory '0', and the test split is based on the subdirectory '1' of the original dataset), but audio recordings of the same speakers can be in different splits at the same time (the opposite is not guaranteed).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nAll recorded audio files were manually annotated.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created by Egor Zubarev, Timofey Moskalets, and URL team.",
"### Licensing Information\n\n\nCreative Commons BY 4.0",
"### Contributions\n\n\nThanks to @bond005 for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nSOVA Dataset is free public STT/ASR dataset. It consists of several parts, one of them is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16kHz Russian live speech with manual annotating, prepared by URL team.\n\n\nAuthors do not divide the dataset into train, validation and test subsets. Therefore, I was compelled to prepare this splitting. The training subset includes more than 82 hours, the validation subset includes approximately 6 hours, and the test subset includes approximately 6 hours too.",
"### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\n\n\nThe audio is in Russian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* transcription: the transcription of the audio file.",
"### Data Splits\n\n\nThis dataset consists of three splits: training, validation, and test. This splitting was realized with accounting of internal structure of SOVA RuDevices (the validation split is based on the subdirectory '0', and the test split is based on the subdirectory '1' of the original dataset), but audio recordings of the same speakers can be in different splits at the same time (the opposite is not guaranteed).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nAll recorded audio files were manually annotated.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created by Egor Zubarev, Timofey Moskalets, and URL team.",
"### Licensing Information\n\n\nCreative Commons BY 4.0",
"### Contributions\n\n\nThanks to @bond005 for adding this dataset."
] |
d278dfd8a801d43f5f3ce23228118d8d53faca81 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for use in MLM and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1
Split sizes: 400000/50000/50000
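As a rough sketch of TSDAE-style training on these sentences with `sentence-transformers` (the column name `text` and the Portuguese base model below are assumptions, not part of this card):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import DenoisingAutoEncoderDataset

# Hypothetical column name; inspect the dataset to confirm.
sentences = load_dataset("rufimelo/PortugueseLegalSentences-v3", split="train")["text"]

base = "neuralmind/bert-base-portuguese-cased"  # assumed base model
word_emb = models.Transformer(base)
pooling = models.Pooling(word_emb.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_emb, pooling])

# TSDAE: noisy sentences are encoded and a decoder reconstructs the originals.
train_data = DenoisingAutoEncoderDataset(list(sentences))
loader = DataLoader(train_data, batch_size=8, shuffle=True, drop_last=True)
loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=base,
                                       tie_encoder_decoder=True)

model.fit(train_objectives=[(loader, loss)], epochs=1,
          weight_decay=0, scheduler="constantlr")
```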
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v3 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-11-01T13:06:19+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-11-01T13:15:47+00:00 | [] | [
"pt"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us
| # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for use in MLM and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1
Split sizes: 400000/50000/50000
### Contributions
@rufimelo99
| [
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE\nExtended version of rufimelo/PortugueseLegalSentences-v1\n\n400000/50000/50000",
"### Contributions\n@rufimelo99"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us \n",
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE\nExtended version of rufimelo/PortugueseLegalSentences-v1\n\n400000/50000/50000",
"### Contributions\n@rufimelo99"
] |
9fe1c98602d295a0e7bc5bb628769d1e71e22be7 |
# MNLI Norwegian
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a [HuggingFace version](https://huggingface.co/datasets/multi_nli) of the dataset available.
This dataset is machine translated using Google Translate. From this translation, different versions of the dataset were created. Included in the repo is a version that is specifically suited for training sentence-BERT models; this version includes the triplet base-entailment-contradiction. The repo also includes a version that mixes English and Norwegian, as well as both CSV and JSON versions. The scripts for generating the datasets are included in this repo.
Please note that there is no test dataset for MNLI, since it is closed. The authors of MNLI inform us that they selected 7500 new contexts in the same way as the original MNLI contexts. That means the English part of the XNLI test sets is highly comparable. For each genre, the text is generally in-domain with the original MNLI test set (it's from the same source and selected by them in the same way). In most cases the XNLI test set can therefore be used.
### The following datasets are available in the repo:
* mnli_no_en_for_simcse.csv
* mnli_no_en_small_for_simcse.csv
* mnli_no_for_simcse.csv
* multinli_1.0_dev_matched_no_mt.jsonl
* multinli_1.0_dev_mismatched_no_mt.jsonl
* multinli_1.0_train_no_mt.jsonl
* nli_for_simcse.csv
* xnli_dev_no_mt.jsonl
* xnli_test_no_mt.jsonl
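One possible way to load a single variant from the Hub (a sketch; the exact column layout of each file should be checked against the repo):

```python
from datasets import load_dataset

# Pick one of the CSV variants listed above.
nli_no = load_dataset(
    "NbAiLab/mnli-norwegian",
    data_files={"train": "mnli_no_for_simcse.csv"},
    split="train",
)
print(nli_no[0])
```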
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.
### Citation Information
The datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work behind this dataset went into compiling the original English version. We therefore suggest that you also cite the original work:
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
| NbAiLab/mnli-norwegian | [
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"language:no",
"language:nob",
"language:en",
"license:apache-2.0",
"norwegian",
"simcse",
"mnli",
"nli",
"sentence",
"region:us"
] | 2022-11-01T14:53:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated", "expert-generated"], "language": ["no", "nob", "en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["sentence-similarity", "text-classification"], "task_ids": ["natural-language-inference", "semantic-similarity-classification"], "pretty_name": "MNLI Norwegian", "tags": ["norwegian", "simcse", "mnli", "nli", "sentence"]} | 2022-11-23T09:45:12+00:00 | [] | [
"no",
"nob",
"en"
] | TAGS
#task_categories-sentence-similarity #task_categories-text-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #language-Norwegian #language-Norwegian Bokmål #language-English #license-apache-2.0 #norwegian #simcse #mnli #nli #sentence #region-us
|
# MNLI Norwegian
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a HuggingFace version of the dataset available.
This dataset is machine translated using Google Translate. From this translation, different versions of the dataset were created. Included in the repo is a version that is specifically suited for training sentence-BERT models. This version includes the triplet: base-entailment-contradiction. There is also a version that mixes English and Norwegian, as well as both csv and json versions. The scripts for generating the datasets are included in this repo.
Please note that there is no test dataset for MNLI, since it is kept closed. The authors of MNLI inform us that they selected 7500 new contexts in the same way as the original MNLI contexts. That means the English part of the XNLI test sets is highly comparable. For each genre, the text is generally in-domain with the original MNLI test set (it's from the same source and selected by them in the same way). In most cases the XNLI test set can therefore be used.
### The following datasets are available in the repo:
* mnli_no_en_for_simcse.csv
* mnli_no_en_small_for_simcse.csv
* mnli_no_for_simcse.csv
* multinli_1.0_dev_matched_no_mt.jsonl
* multinli_1.0_dev_mismatched_no_mt.jsonl
* multinli_1.0_train_no_mt.jsonl
* nli_for_simcse.csv
* xnli_dev_no_mt.jsonl
* xnli_test_no_mt.jsonl
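
As an illustration, a minimal sketch of reading one of the SimCSE-style files (it assumes the CSV has been downloaded from this repo and asserts nothing about column names):

```python
# Minimal sketch, assuming nli_for_simcse.csv was downloaded locally.
# The SimCSE-style CSVs hold base/entailment/contradiction triplets.
import pandas as pd

triplets = pd.read_csv("nli_for_simcse.csv")
print(triplets.shape)
print(triplets.head())  # inspect the triplet columns
```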
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.
The datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work related to this dataset is compiling the English version. We therefore suggest that you also cite the original work:
'''
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "URL
}
| [
"# MNLI Norwegian\nThe Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a HuggingFace version of the dataset available. \n\nThis dataset is machine translated using Google Translate. From this translation different version of the dataset where created. Included in the repo is a version that is specifically suited for training sentence-BERT-models. This version include the triplet: base-entailment-contradiction. It also includes a version that mixes English and Norwegian, as well as both csv and json-verions. The script for generating the datasets are included in this repo. \n\nPlease note that there is no test dataset for MNLI, since this is closed. The authors of MNLI informs us that they selected 7500 new contexts in the same way as the original MNLI contexts. That means the English part of the XNLI test sets is highly comparable. For each genre, the text is generally in-domain with the original MNLI test set (it's from the same source and selected by me in the same way). In most cases the XNLI test set can therefore be used.",
"### The following datasets are available in the repo:\n\n* mnli_no_en_for_simcse.csv\n* mnli_no_en_small_for_simcse.csv\n* mnli_no_for_simcse.csv\n* multinli_1.0_dev_matched_no_mt.jsonl\n* multinli_1.0_dev_mismatched_no_mt.jsonl\n* multinli_1.0_train_no_mt.jsonl\n* nli_for_simcse.csv\n* xnli_dev_no_mt.jsonl\n* xnli_test_no_mt.jsonl",
"### Licensing Information\nThe majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.\n\n\n\nThe datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work related to this dataset is compiling the English version. We therefore suggest that you also cite the original work:\n\n'''\n@InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"URL\n}"
] | [
"TAGS\n#task_categories-sentence-similarity #task_categories-text-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-classification #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #language-Norwegian #language-Norwegian Bokmål #language-English #license-apache-2.0 #norwegian #simcse #mnli #nli #sentence #region-us \n",
"# MNLI Norwegian\nThe Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a HuggingFace version of the dataset available. \n\nThis dataset is machine translated using Google Translate. From this translation different version of the dataset where created. Included in the repo is a version that is specifically suited for training sentence-BERT-models. This version include the triplet: base-entailment-contradiction. It also includes a version that mixes English and Norwegian, as well as both csv and json-verions. The script for generating the datasets are included in this repo. \n\nPlease note that there is no test dataset for MNLI, since this is closed. The authors of MNLI informs us that they selected 7500 new contexts in the same way as the original MNLI contexts. That means the English part of the XNLI test sets is highly comparable. For each genre, the text is generally in-domain with the original MNLI test set (it's from the same source and selected by me in the same way). In most cases the XNLI test set can therefore be used.",
"### The following datasets are available in the repo:\n\n* mnli_no_en_for_simcse.csv\n* mnli_no_en_small_for_simcse.csv\n* mnli_no_for_simcse.csv\n* multinli_1.0_dev_matched_no_mt.jsonl\n* multinli_1.0_dev_mismatched_no_mt.jsonl\n* multinli_1.0_train_no_mt.jsonl\n* nli_for_simcse.csv\n* xnli_dev_no_mt.jsonl\n* xnli_test_no_mt.jsonl",
"### Licensing Information\nThe majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.\n\n\n\nThe datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work related to this dataset is compiling the English version. We therefore suggest that you also cite the original work:\n\n'''\n@InProceedings{N18-1101,\n author = \"Williams, Adina\n and Nangia, Nikita\n and Bowman, Samuel\",\n title = \"A Broad-Coverage Challenge Corpus for\n Sentence Understanding through Inference\",\n booktitle = \"Proceedings of the 2018 Conference of\n the North American Chapter of the\n Association for Computational Linguistics:\n Human Language Technologies, Volume 1 (Long\n Papers)\",\n year = \"2018\",\n publisher = \"Association for Computational Linguistics\",\n pages = \"1112--1122\",\n location = \"New Orleans, Louisiana\",\n url = \"URL\n}"
] |
3859c76db2f6f3d3b9a3863345e3ccdbff75879d | # Dataset Card for "fashion-product-images-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-small) | ashraq/fashion-product-images-small | [
"region:us"
] | 2022-11-01T20:22:50+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "gender", "dtype": "string"}, {"name": "masterCategory", "dtype": "string"}, {"name": "subCategory", "dtype": "string"}, {"name": "articleType", "dtype": "string"}, {"name": "baseColour", "dtype": "string"}, {"name": "season", "dtype": "string"}, {"name": "year", "dtype": "float64"}, {"name": "usage", "dtype": "string"}, {"name": "productDisplayName", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 546202015.44, "num_examples": 44072}], "download_size": 271496441, "dataset_size": 546202015.44}} | 2022-11-01T20:25:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fashion-product-images-small"
More Information needed
Data was obtained from here | [
"# Dataset Card for \"fashion-product-images-small\"\n\nMore Information needed\n\nData was obtained from here"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion-product-images-small\"\n\nMore Information needed\n\nData was obtained from here"
] |
caf62a8694ff3c9fa6523dc1f74d446569fded46 | 

 | Valentingmz/Repositor | [
"region:us"
] | 2022-11-01T20:28:07+00:00 | {} | 2022-11-01T20:39:51+00:00 | [] | [] | TAGS
#region-us
| !Luz_life_in_the_process_of_flooding_a_coastal_city_photorealist_04bf00d0-URL
!Luz_life_in_the_process_of_flooding_a_coastal_city_photorealist_405f00d5-URL
!Luz_life_in_the_process_of_flooding_a_coastal_city_photorealist_04bf00d0-URL | [] | [
"TAGS\n#region-us \n"
] |
a311ec1ad64e5e5a005e8759b8dde88acecc42eb | # AutoTrain Dataset for project: mysheet
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mysheet.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "The term \u201cpseudocode\u201d refers to writing code in a humanly understandable language such as English, and breaking it down to its core concepts.",
"question": "What is pseudocode?",
"answers.text": [
"Pseudocode is breaking down your code in English."
],
"answers.answer_start": [
33
]
},
{
"context": "Python is an interactive programming language designed for API and Machine Learning use.",
"question": "What is Python?",
"answers.text": [
"Python is an interactive programming language."
],
"answers.answer_start": [
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3 |
| valid | 1 |
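
For completeness, a minimal loading sketch (a sketch only: it assumes the repo is publicly readable on the Hub, and uses the flattened field names listed above):

```python
from datasets import load_dataset

# Split names follow the table above (train/valid).
ds = load_dataset("LiveEvil/autotrain-data-mysheet")
sample = ds["train"][0]
print(sample["question"])
print(sample["answers.text"])  # flattened answers column, per the field list
```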
| LiveEvil/autotrain-data-mysheet | [
"language:en",
"region:us"
] | 2022-11-01T20:55:23+00:00 | {"language": ["en"]} | 2022-11-01T20:55:52+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| AutoTrain Dataset for project: mysheet
======================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project mysheet.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
36cf8a781bf9396d6b7e7fb536ef635571fbec77 | This is a ParaModeler, for rating hook/grabbers of an introduction paragraph. | LiveEvil/EsCheck-Paragraph | [
"license:openrail",
"region:us"
] | 2022-11-01T21:33:39+00:00 | {"license": "openrail"} | 2022-11-02T15:15:44+00:00 | [] | [] | TAGS
#license-openrail #region-us
| This is a ParaModeler, for rating hook/grabbers of an introduction paragraph. | [] | [
"TAGS\n#license-openrail #region-us \n"
] |
863faca5cd61e14147241f86fb9ffcce538cb800 | To access an image use the following
Bucket URL: https://d26smi9133w0oo.cloudfront.net/
example:
https://d26smi9133w0oo.cloudfront.net/room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-a-chicken-breaking-through-a-mirrornnotn.webp
Full image URL: **Bucket URL + key**
SQLite
https://huggingface.co/datasets/huggingface-projects/sd-multiplayer-data/blob/main/rooms_data.db
```bash
$ sqlite3 rooms_data.db
sqlite> PRAGMA table_info(rooms_data);
0|id|INTEGER|1||1
1|room_id|TEXT|1||0
2|uuid|TEXT|1||0
3|x|INTEGER|1||0
4|y|INTEGER|1||0
5|prompt|TEXT|1||0
6|time|DATETIME|1||0
7|key|TEXT|1||0
sqlite> SELECT * FROM rooms_data WHERE room_id = 'room-40';
```
JSON example
https://huggingface.co/datasets/huggingface-projects/sd-multiplayer-data/blob/main/room-39.json
```json
[
{
"id": 160103269,
"room_id": "room-7",
"uuid": "CZk4C72xBr5wPfTpwDAnG6",
"x": 7648,
"y": 7008,
"prompt": "7648_7008 a chicken breaking through a mirrornnotn webp",
"time": "2022-12-08T17:28:06+00:00",
"key": "room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-a-chicken-breaking-through-a-mirrornnotn.webp"
}
]
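
For programmatic access, a minimal sketch that joins the SQLite index with the bucket URL (it assumes `rooms_data.db` has been downloaded from this repo):

```python
# Minimal sketch: list full image URLs for one room from the SQLite index.
import sqlite3

BUCKET_URL = "https://d26smi9133w0oo.cloudfront.net/"

conn = sqlite3.connect("rooms_data.db")
rows = conn.execute(
    "SELECT prompt, key FROM rooms_data WHERE room_id = ?", ("room-40",)
)
for prompt, key in rows:
    print(prompt, "->", BUCKET_URL + key)  # Bucket URL + key = image URL
conn.close()
```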
| huggingface-projects/sd-multiplayer-data | [
"region:us"
] | 2022-11-02T00:57:18+00:00 | {} | 2022-12-13T14:37:41+00:00 | [] | [] | TAGS
#region-us
| To access an image use the following
Bucket URL: URL
example:
URL
Bucket URL/key
SQLite
URL
JSON example
URL
'''json
[
{
"id": 160103269,
"room_id": "room-7",
"uuid": "CZk4C72xBr5wPfTpwDAnG6",
"x": 7648,
"y": 7008,
"prompt": "7648_7008 a chicken breaking through a mirrornnotn webp",
"time": "2022-12-08T17:28:06+00:00",
"key": "room-7/1670520485-CZk4C72xBr5wPfTpwDAnG6-7648_7008-URL"
}
]
| [] | [
"TAGS\n#region-us \n"
] |
7c394b430826ee4b382c888e833699dffaea5423 | # Dataset Card for "crows_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | henryscheible/crows_pairs | [
"region:us"
] | 2022-11-02T02:25:49+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "test", "num_bytes": 146765.59151193633, "num_examples": 302}, {"name": "train", "num_bytes": 586090.4084880636, "num_examples": 1206}], "download_size": 113445, "dataset_size": 732856.0}} | 2022-11-02T02:25:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "crows_pairs"
More Information needed | [
"# Dataset Card for \"crows_pairs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"crows_pairs\"\n\nMore Information needed"
] |
76bf143f6cf6aebfb72f24bc3b9e2d2b5b0a0899 | # `chinese_clean_passages_80m`
包含**8千余万**(88328203)个**纯净**中文段落,不包含任何字母、数字。\
Containing more than **80 million pure \& clean** Chinese passages, without any letters/digits/special tokens.
文本长度大部分介于50\~200个汉字之间。\
The passage length is approximately 50\~200 Chinese characters.
通过`datasets.load_dataset()`下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\
Downloading the dataset will result in 38 data shards, each about 340M in size (roughly 12GB in total). Make sure there's enough space on your device:)
```
>>> from datasets import load_dataset
>>> passage_dataset = load_dataset('beyond/chinese_clean_passages_80m')
Downloading data: 100%|█| 341M/341M [00:06<00:00, 52.0MB
Downloading data: 100%|█| 342M/342M [00:06<00:00, 54.4MB
Downloading data: 100%|█| 341M/341M [00:06<00:00, 49.1MB
Downloading data: 100%|█| 341M/341M [00:14<00:00, 23.5MB
Downloading data: 100%|█| 341M/341M [00:10<00:00, 33.6MB
Downloading data: 100%|█| 342M/342M [00:07<00:00, 43.1MB
...(38 data shards)
```
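
After loading, a quick sanity check (each example exposes a single text field named `passage`):

```python
# Each example has one field, "passage", holding clean Chinese text.
print(passage_dataset['train'].num_rows)       # 88,328,203 passages in total
print(passage_dataset['train'][0]['passage'])  # peek at the first passage
```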
本数据集被用于训练[GENIUS模型中文版](https://huggingface.co/spaces/beyond/genius),如果这个数据集对您的研究有帮助,请引用以下论文。
This dataset is created for the pre-training of [GENIUS model](https://huggingface.co/spaces/beyond/genius), if you find this dataset useful, please cite our paper.
```
@article{guo2022genius,
title={GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation},
author={Guo, Biyang and Gong, Yeyun and Shen, Yelong and Han, Songqiao and Huang, Hailiang and Duan, Nan and Chen, Weizhu},
journal={arXiv preprint arXiv:2211.10330},
year={2022}
}
```
---
Acknowledgment:\
数据是基于[CLUE中文预训练语料集](https://github.com/CLUEbenchmark/CLUE)进行处理、过滤得到的。\
This dataset is processed/filtered from the [CLUE pre-training corpus](https://github.com/CLUEbenchmark/CLUE).
原始数据集引用:
```
@misc{bright_xu_2019_3402023,
author = {Bright Xu},
title = {NLP Chinese Corpus: Large Scale Chinese Corpus for NLP },
month = sep,
year = 2019,
doi = {10.5281/zenodo.3402023},
version = {1.0},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.3402023}
}
```
| beyond/chinese_clean_passages_80m | [
"region:us"
] | 2022-11-02T02:53:49+00:00 | {"dataset_info": {"features": [{"name": "passage", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18979214734, "num_examples": 88328203}], "download_size": 1025261393, "dataset_size": 18979214734}} | 2022-12-06T07:09:20+00:00 | [] | [] | TAGS
#region-us
| # 'chinese_clean_passages_80m'
包含8千余万(88328203)个纯净中文段落,不包含任何字母、数字。\
Containing more than 80 million pure \& clean Chinese passages, without any letters/digits/special tokens.
文本长度大部分介于50\~200个汉字之间。\
The passage length is approximately 50\~200 Chinese characters.
通过'datasets.load_dataset()'下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\
Downloading the dataset will result in 38 data shards each of which is about 340M and 12GB in total. Make sure there's enough space in your device:)
本数据集被用于训练GENIUS模型中文版,如果这个数据集对您的研究有帮助,请引用以下论文。
This dataset is created for the pre-training of GENIUS model, if you find this dataset useful, please cite our paper.
---
Acknowledgment:\
数据是基于CLUE中文预训练语料集进行处理、过滤得到的。\
This dataset is processed/filtered from the CLUE pre-training corpus.
原始数据集引用:
| [
"# 'chinese_clean_passages_80m'\n\n包含8千余万(88328203)个纯净中文段落,不包含任何字母、数字。\\\nContaining more than 80 million pure \\& clean Chinese passages, without any letters/digits/special tokens.\n\n文本长度大部分介于50\\~200个汉字之间。\\\nThe passage length is approximately 50\\~200 Chinese characters.\n\n通过'datasets.load_dataset()'下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\\\nDownloading the dataset will result in 38 data shards each of which is about 340M and 12GB in total. Make sure there's enough space in your device:)\n\n\n\n本数据集被用于训练GENIUS模型中文版,如果这个数据集对您的研究有帮助,请引用以下论文。\nThis dataset is created for the pre-training of GENIUS model, if you find this dataset useful, please cite our paper. \n\n\n\n---\nAcknowledgment:\\\n数据是基于CLUE中文预训练语料集进行处理、过滤得到的。\\\nThis dataset is processed/filtered from the CLUE pre-training corpus.\n\n原始数据集引用:"
] | [
"TAGS\n#region-us \n",
"# 'chinese_clean_passages_80m'\n\n包含8千余万(88328203)个纯净中文段落,不包含任何字母、数字。\\\nContaining more than 80 million pure \\& clean Chinese passages, without any letters/digits/special tokens.\n\n文本长度大部分介于50\\~200个汉字之间。\\\nThe passage length is approximately 50\\~200 Chinese characters.\n\n通过'datasets.load_dataset()'下载数据,会产生38个大小约340M的数据包,共约12GB,所以请确保有足够空间。\\\nDownloading the dataset will result in 38 data shards each of which is about 340M and 12GB in total. Make sure there's enough space in your device:)\n\n\n\n本数据集被用于训练GENIUS模型中文版,如果这个数据集对您的研究有帮助,请引用以下论文。\nThis dataset is created for the pre-training of GENIUS model, if you find this dataset useful, please cite our paper. \n\n\n\n---\nAcknowledgment:\\\n数据是基于CLUE中文预训练语料集进行处理、过滤得到的。\\\nThis dataset is processed/filtered from the CLUE pre-training corpus.\n\n原始数据集引用:"
] |
0701ea3fa42db65b7237cab8e916a35659c5b845 | # Dataset Card for "animal-crossing-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pseeej/animal-crossing-data | [
"region:us"
] | 2022-11-02T03:30:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7209776.0, "num_examples": 389}], "download_size": 7181848, "dataset_size": 7209776.0}} | 2022-11-02T03:31:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "animal-crossing-data"
More Information needed | [
"# Dataset Card for \"animal-crossing-data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"animal-crossing-data\"\n\nMore Information needed"
] |
6b06220d4057c9f974c693567958ebc32f764d89 | # Dataset Card for "onset-drums_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gary109/onset-drums_corpora_parliament_processed | [
"region:us"
] | 2022-11-02T03:50:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43947, "num_examples": 283}], "download_size": 14691, "dataset_size": 43947}} | 2022-11-22T07:42:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "onset-drums_corpora_parliament_processed"
More Information needed | [
"# Dataset Card for \"onset-drums_corpora_parliament_processed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"onset-drums_corpora_parliament_processed\"\n\nMore Information needed"
] |
0179bb2c085b52b01ca23991c7581c136b76e0e6 | # Dataset Card for "goodreads_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dhmeltzer/goodreads_test | [
"region:us"
] | 2022-11-02T04:14:19+00:00 | {"dataset_info": {"features": [{"name": "review_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1010427121, "num_examples": 478033}], "download_size": 496736771, "dataset_size": 1010427121}} | 2022-11-02T04:14:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "goodreads_test"
More Information needed | [
"# Dataset Card for \"goodreads_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"goodreads_test\"\n\nMore Information needed"
] |
dfefc099c175c50fa26da17038a2970fc6808171 | # Dataset Card for "goodreads_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dhmeltzer/goodreads_train | [
"region:us"
] | 2022-11-02T04:14:58+00:00 | {"dataset_info": {"features": [{"name": "rating", "dtype": "int64"}, {"name": "review_text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1893978314, "num_examples": 900000}], "download_size": 928071460, "dataset_size": 1893978314}} | 2022-11-02T04:16:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "goodreads_train"
More Information needed | [
"# Dataset Card for \"goodreads_train\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"goodreads_train\"\n\nMore Information needed"
] |
6fd649a5748873d108c8a785a38a55ddca291260 | # Dataset Card for "nymemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | annabelng/nymemes | [
"region:us"
] | 2022-11-02T07:59:23+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3760740114.362, "num_examples": 32933}], "download_size": 4007130292, "dataset_size": 3760740114.362}} | 2022-11-02T08:02:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nymemes"
More Information needed | [
"# Dataset Card for \"nymemes\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nymemes\"\n\nMore Information needed"
] |
1fafac00f14590feb94984ee7dc1adc861179fc7 | # Dataset Card for "music_genres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/music_genres | [
"region:us"
] | 2022-11-02T10:01:46+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1978321742.996, "num_examples": 5076}, {"name": "train", "num_bytes": 7844298868.902, "num_examples": 19909}], "download_size": 9793244255, "dataset_size": 9822620611.898}} | 2022-11-02T10:27:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "music_genres"
More Information needed | [
"# Dataset Card for \"music_genres\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"music_genres\"\n\nMore Information needed"
] |
6fd41bb2494326e92dd46a92a1aeff50fbce4fdd |
## About this dataset
The [CAES](http://galvan.usc.es/caes/) [(Parodi, 2015)](https://www.tandfonline.com/doi/full/10.1080/23247797.2015.1084685?cookieSet=1) dataset, also referred to as the “Corpus de Aprendices del Español” (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset, since its current website only provides content filtered by categories that have to be manually selected. The readability level of each text in CAES follows the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). The [raw version](https://huggingface.co/datasets/lmvasque/caes/blob/main/caes.raw.csv) of this corpus also contains information about the learners and the type of assignments they were given to create each text.
We have downloaded this dataset from its original [website](https://galvan.usc.es/caes/search) to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).
## About the splits
We have uploaded two versions of the CAES corpus:
- **caes.raw.csv**: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.
- **caes.jsonl**: this data is limited to the text samples, the original levels of readability and our standardised category according to these: simple/complex and basic/intermediate/advanced. You can check for more details about these splits in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
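
For a quick start, a minimal sketch of reading both versions (assuming the files have been downloaded from this repository; no column names are asserted here):

```python
# Minimal sketch: load both released versions of the corpus.
import pandas as pd

raw = pd.read_csv("caes.raw.csv")               # includes learner/assignment metadata
clean = pd.read_json("caes.jsonl", lines=True)  # texts + readability labels
print(raw.shape, clean.shape)
```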
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)"
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
We have extracted the CAES corpus from their [website](https://galvan.usc.es/caes/search). If you use this corpus, please also cite their work as follows:
```
@article{Parodi2015,
author = "Giovanni Parodi",
title = "Corpus de aprendices de español (CAES)",
journal = "Journal of Spanish Language Teaching",
volume = "2",
number = "2",
pages = "194-200",
year = "2015",
publisher = "Routledge",
doi = "10.1080/23247797.2015.1084685",
URL = "https://doi.org/10.1080/23247797.2015.1084685",
eprint = "https://doi.org/10.1080/23247797.2015.1084685"
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/caes | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:40:31+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T18:09:24+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## About this dataset
The CAES (Parodi, 2015) dataset, also referred as the “Corpus de Aprendices del Español” (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset since its current website only provides content filtered by categories that have to be manually selected. The readability level of each text in CAES follows the Common European Framework of Reference for Languages (CEFR). The raw version of this corpus also contains information about the learners and the type of assignments with which they were assigned to create each text.
We have downloaded this dataset from its original website to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).
## About the splits
We have uploaded two versions of the CAES corpus:
- URL: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.
- URL: this data is limited to the text samples, the original levels of readability and our standardised category according to these: simple/complex and basic/intermediate/advanced. You can check for more details about these splits in our paper.
If you use our splits in your research, please cite our work: "A Benchmark for Neural Readability Assessment of Texts in Spanish"
We have extracted the CAES corpus from their website. If you use this corpus, please also cite their work as follows:
You can also find more details about the project in our GitHub. | [
"## About this dataset\nThe CAES (Parodi, 2015) dataset, also referred as the “Corpus de Aprendices del Español” (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset since its current website only provides content filtered by categories that have to be manually selected. The readability level of each text in CAES follows the Common European Framework of Reference for Languages (CEFR). The raw version of this corpus also contains information about the learners and the type of assignments with which they were assigned to create each text.\n\nWe have downloaded this dataset from its original website to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).",
"## About the splits\nWe have uploaded two versions of the CAES corpus:\n- URL: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.\n- URL: this data is limited to the text samples, the original levels of readability and our standardised category according to these: simple/complex and basic/intermediate/advanced. You can check for more details about these splits in our paper.\n\n\n\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\"\n\n\n\nWe have extracted the CAES corpus from their website. If you use this corpus, please also cite their work as follows:\n\n\nYou can also find more details about the project in our GitHub."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## About this dataset\nThe CAES (Parodi, 2015) dataset, also referred as the “Corpus de Aprendices del Español” (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset since its current website only provides content filtered by categories that have to be manually selected. The readability level of each text in CAES follows the Common European Framework of Reference for Languages (CEFR). The raw version of this corpus also contains information about the learners and the type of assignments with which they were assigned to create each text.\n\nWe have downloaded this dataset from its original website to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).",
"## About the splits\nWe have uploaded two versions of the CAES corpus:\n- URL: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.\n- URL: this data is limited to the text samples, the original levels of readability and our standardised category according to these: simple/complex and basic/intermediate/advanced. You can check for more details about these splits in our paper.\n\n\n\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\"\n\n\n\nWe have extracted the CAES corpus from their website. If you use this corpus, please also cite their work as follows:\n\n\nYou can also find more details about the project in our GitHub."
] |
189a95069a1544141fd9c21f638b979b106460f1 | ## About this dataset
The dataset Coh-Metrix-Esp (Cuentos) [(Quispesaravia et al., 2016)](https://aclanthology.org/L16-1745/) is a collection of 100 documents consisting of 50 children's fables (“simple” texts) and 50 stories for adults (“complex” texts) scraped from the web. If you use this data, please credit the original website and our work as well (see citations below).
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
#### Coh-Metrix-Esp (Cuentos)
```
@inproceedings{quispesaravia-etal-2016-coh,
title = "{C}oh-{M}etrix-{E}sp: A Complexity Analysis Tool for Documents Written in {S}panish",
author = "Quispesaravia, Andre and
Perez, Walter and
Sobrevilla Cabezudo, Marco and
Alva-Manchego, Fernando",
booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
month = may,
year = "2016",
address = "Portoro{\v{z}}, Slovenia",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L16-1745",
pages = "4694--4698",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/coh-metrix-esp | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-02T10:43:02+00:00 | {"license": "cc-by-sa-4.0"} | 2022-11-11T17:44:04+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-us
| ## About this dataset
The dataset Coh-Metrix-Esp (Cuentos) (Quispesaravia et al., 2016) is a collection of 100 documents consisting of 50 children fables (“simple” texts) and 50 stories for adults (“complex” texts) scrapped from the web. If you use this data, please credit the original website and our work as well (see citations below).
If you use our splits in your research, please cite our work: "A Benchmark for Neural Readability Assessment of Texts in Spanish".
#### Coh-Metrix-Esp (Cuentos)
You can also find more details about the project in our GitHub. | [
"## About this dataset\n\nThe dataset Coh-Metrix-Esp (Cuentos) (Quispesaravia et al., 2016) is a collection of 100 documents consisting of 50 children fables (“simple” texts) and 50 stories for adults (“complex” texts) scrapped from the web. If you use this data, please credit the original website and our work as well (see citations below).\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\".",
"#### Coh-Metrix-Esp (Cuentos)\n\n\nYou can also find more details about the project in our GitHub."
] | [
"TAGS\n#license-cc-by-sa-4.0 #region-us \n",
"## About this dataset\n\nThe dataset Coh-Metrix-Esp (Cuentos) (Quispesaravia et al., 2016) is a collection of 100 documents consisting of 50 children fables (“simple” texts) and 50 stories for adults (“complex” texts) scrapped from the web. If you use this data, please credit the original website and our work as well (see citations below).\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\".",
"#### Coh-Metrix-Esp (Cuentos)\n\n\nYou can also find more details about the project in our GitHub."
] |
9a9ece7cc079929fb0902994f71e5c63f4284e11 | ## About this dataset
This dataset was collected from [HablaCultura.com](https://hablacultura.com/), a website with resources for Spanish students, labeled by instructors following the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). We have scraped the freely available articles from its original [website](https://hablacultura.com/) to make them available to the community. If you use this data, please credit the original [website](https://hablacultura.com/) and our work as well.
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). | lmvasque/hablacultura | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:44:43+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T17:42:13+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| ## About this dataset
This dataset was collected from URL a website with resources for Spanish students, labeled by instructors following the Common European Framework of Reference for Languages (CEFR). We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.
If you use our splits in your research, please cite our work: "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can also find more details about the project in our GitHub. | [
"## About this dataset\n\nThis dataset was collected from URL a website with resources for Spanish students, labeled by instructors following the Common European Framework of Reference for Languages (CEFR). We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\". \n\n\nYou can also find more details about the project in our GitHub."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## About this dataset\n\nThis dataset was collected from URL a website with resources for Spanish students, labeled by instructors following the Common European Framework of Reference for Languages (CEFR). We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\". \n\n\nYou can also find more details about the project in our GitHub."
] |
b8ec1babb569f217a0248fb05f8323539bf90d96 |
## About this dataset
This dataset was collected from [kwiziq.com](https://www.kwiziq.com/), a website dedicated to aiding Spanish learning through automated methods. It also provides articles at different CEFR-based levels. We have scraped the freely available articles from its original [website](https://www.kwiziq.com/) to make them available to the community. If you use this data, please credit the original [website](https://www.kwiziq.com/) and our work as well.
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
| lmvasque/kwiziq | [
"license:cc-by-4.0",
"region:us"
] | 2022-11-02T10:45:55+00:00 | {"license": "cc-by-4.0"} | 2022-11-11T17:40:47+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
## About this dataset
This dataset was collected from URL, a website dedicated to aid Spanish learning through automated methods. It also provides articles in different CEFR-based levels. We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.
If you use our splits in your research, please cite our work: "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can also find more details about the project in our GitHub.
| [
"## About this dataset\n\nThis dataset was collected from URL, a website dedicated to aid Spanish learning through automated methods. It also provides articles in different CEFR-based levels. We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\". \n\n\nYou can also find more details about the project in our GitHub."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## About this dataset\n\nThis dataset was collected from URL, a website dedicated to aid Spanish learning through automated methods. It also provides articles in different CEFR-based levels. We have scraped the freely available articles from its original website to make it available to the community. If you use this data, please credit the original website and our work as well.\n\nIf you use our splits in your research, please cite our work: \"A Benchmark for Neural Readability Assessment of Texts in Spanish\". \n\n\nYou can also find more details about the project in our GitHub."
] |
9361d38c024c137755d8cefe9be826dc16be4885 | # Dataset Card for "audio-test-push"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/audio-test-push | [
"region:us"
] | 2022-11-02T11:36:14+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3994705.0, "num_examples": 10}, {"name": "train", "num_bytes": 3738678.0, "num_examples": 10}], "download_size": 7730848, "dataset_size": 7733383.0}} | 2022-11-02T11:36:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "audio-test-push"
More Information needed | [
"# Dataset Card for \"audio-test-push\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"audio-test-push\"\n\nMore Information needed"
] |
a5e76a325594cc02dfb1cba47f07c497ab01bf60 | # Dataset Card for "muld_OpenSubtitles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ghomasHudson/muld_OpenSubtitles | [
"region:us"
] | 2022-11-02T11:55:18+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 176793874, "num_examples": 1385}, {"name": "train", "num_bytes": 1389584660, "num_examples": 27749}], "download_size": 967763941, "dataset_size": 1566378534}} | 2022-11-02T11:56:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "muld_OpenSubtitles"
More Information needed | [
"# Dataset Card for \"muld_OpenSubtitles\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"muld_OpenSubtitles\"\n\nMore Information needed"
] |
282a412b73478e5e843367c5ece3d3f8660f05b0 | # Dataset Card for "muld_AO3_Style_Change_Detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ghomasHudson/muld_AO3_Style_Change_Detection | [
"region:us"
] | 2022-11-02T12:06:13+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 282915635, "num_examples": 2352}, {"name": "train", "num_bytes": 762370660, "num_examples": 6354}, {"name": "validation", "num_bytes": 83699681, "num_examples": 705}], "download_size": 677671983, "dataset_size": 1128985976}} | 2022-11-02T12:06:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "muld_AO3_Style_Change_Detection"
More Information needed | [
"# Dataset Card for \"muld_AO3_Style_Change_Detection\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"muld_AO3_Style_Change_Detection\"\n\nMore Information needed"
] |
c50eef2470554f8a1271a921d55aa7dc34420738 | # Dataset Card for "processed_multiscale_rt_critics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | frankier/processed_multiscale_rt_critics | [
"region:us"
] | 2022-11-02T12:15:25+00:00 | {"dataset_info": {"features": [{"name": "movie_title", "dtype": "string"}, {"name": "publisher_name", "dtype": "string"}, {"name": "critic_name", "dtype": "string"}, {"name": "review_content", "dtype": "string"}, {"name": "review_score", "dtype": "string"}, {"name": "grade_type", "dtype": "string"}, {"name": "orig_num", "dtype": "float32"}, {"name": "orig_denom", "dtype": "float32"}, {"name": "includes_zero", "dtype": "bool"}, {"name": "label", "dtype": "uint8"}, {"name": "scale_points", "dtype": "uint8"}, {"name": "multiplier", "dtype": "uint8"}, {"name": "group_id", "dtype": "uint32"}], "splits": [{"name": "train", "num_bytes": 117244343, "num_examples": 540256}, {"name": "test", "num_bytes": 28517095, "num_examples": 131563}], "download_size": 0, "dataset_size": 145761438}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-10-03T16:16:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_multiscale_rt_critics"
More Information needed | [
"# Dataset Card for \"processed_multiscale_rt_critics\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_multiscale_rt_critics\"\n\nMore Information needed"
] |
63b6d26bb53a87c2b8ea9c9428bee6ab7a7532ef | # Dataset Card for "muld_NarrativeQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ghomasHudson/muld_NarrativeQA | [
"region:us"
] | 2022-11-02T12:17:00+00:00 | {"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3435452065, "num_examples": 10143}, {"name": "train", "num_bytes": 11253796383, "num_examples": 32747}, {"name": "validation", "num_bytes": 1176625993, "num_examples": 3373}], "download_size": 8819172017, "dataset_size": 15865874441}} | 2022-11-02T12:24:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "muld_NarrativeQA"
More Information needed | [
"# Dataset Card for \"muld_NarrativeQA\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"muld_NarrativeQA\"\n\nMore Information needed"
] |
8df0b33afd830cd72656e23c6b1cedec2b285b37 |
# Dataset Card for GEM/TaTA
## Dataset Description
- **Homepage:** https://github.com/google-research/url-nlp
- **Repository:** https://github.com/google-research/url-nlp
- **Paper:** https://arxiv.org/abs/2211.00142
- **Leaderboard:** https://github.com/google-research/url-nlp
- **Point of Contact:** Sebastian Ruder
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/TaTA).
### Dataset Summary
Existing data-to-text generation datasets are mostly limited to English. Table-to-Text in African languages (TaTA) addresses this lack of data as the first large multilingual table-to-text dataset with a focus on African languages. TaTA was created by transcribing figures and accompanying text in bilingual reports by the Demographic and Health Surveys Program, followed by professional translation to make the dataset fully parallel. TaTA includes 8,700 examples in nine languages including four African languages (Hausa, Igbo, Swahili, and Yorùbá) and a zero-shot test language (Russian).
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/TaTA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/TaTA).
#### website
[Github](https://github.com/google-research/url-nlp)
#### paper
[ArXiv](https://arxiv.org/abs/2211.00142)
#### authors
Sebastian Gehrmann, Sebastian Ruder , Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2211.00142)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@misc{gehrmann2022TaTA,
Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
Year = {2022},
Eprint = {arXiv:2211.00142},
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Sebastian Ruder
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Github](https://github.com/google-research/url-nlp)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The paper introduces a metric, StATA, which is trained on human ratings and used to rank approaches submitted to the leaderboard.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Portuguese`, `Arabic`, `French`, `Hausa`, `Swahili (macrolanguage)`, `Igbo`, `Yoruba`, `Russian`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language is taken from reports by the Demographic and Health Surveys Program.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset poses significant reasoning challenges and is thus meant as a way to assess the verbalization and reasoning capabilities of structure-to-text models.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarize key information from a table in a single sentence.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Research
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Research
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Sebastian Gehrmann (Google Research)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `example_id`: The ID of the example. Each ID (e.g., `AB20-ar-1`) consists of three parts: the document ID, the language ISO 639-1 code, and the index of the table within the document.
- `title`: The title of the table.
- `unit_of_measure`: A description of the numerical value of the data. E.g., percentage of households with clean water.
- `chart_type`: The kind of chart associated with the data. We consider the following (normalized) types: horizontal bar chart, map chart, pie graph, tables, line chart, pie chart, vertical chart type, line graph, vertical bar chart, and other.
- `was_translated`: Whether the table was transcribed in the original language of the report or translated.
- `table_data`: The table content, stored as a JSON-encoded string of a two-dimensional list, organized by row, from left to right, starting from the top of the table. The number of items varies per table. Empty cells are given as empty string values in the corresponding table cell (a decoding sketch follows this list).
- `table_text`: The sentences forming the description of each table, encoded as a JSON object. When there is more than one sentence, they are separated by commas. The number of items varies per table.
- `linearized_input`: A single string that contains the table content separated by vertical bars (|). It includes the title, the unit of measure, and the content of each cell together with its row and column headers in parentheses, e.g., (Medium Empowerment, Mali, 17.9).
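Because `table_data` is stored as a JSON-encoded string, it has to be decoded before use. Below is a minimal sketch of decoding a table and rebuilding a `linearized_input`-style string; the `linearize` helper is illustrative and not part of the dataset loader, and the official linearization may treat edge cases (e.g., tables without row headers) differently.

```python
import json

def linearize(example):
    # table_data is a JSON-encoded two-dimensional list, organized by row;
    # the first row holds the column headers, the first column the row headers.
    rows = json.loads(example["table_data"])
    col_headers, data_rows = rows[0], rows[1:]
    cells = []
    for row in data_rows:
        row_header = row[0]
        for col_header, value in zip(col_headers[1:], row[1:]):
            if value != "":  # empty cells are empty strings
                cells.append(f"({col_header}, {row_header}, {value})")
    return " | ".join([example["title"], example["unit_of_measure"], " ".join(cells)])
```

Applied to the example instance shown below, this reproduces tuples such as `(Child mortality, 1990 JPFHS, 5)` from the `linearized_input` field.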
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure includes all available information for the infographics on which the dataset is based.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
Annotators looked through the English text to identify sentences that describe an infographic. They then identified the corresponding location in the parallel non-English document. All sentences were extracted.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"example_id": "FR346-en-39",
"title": "Trends in early childhood mortality rates",
"unit_of_measure": "Deaths per 1,000 live births for the 5-year period before the survey",
"chart_type": "Line chart",
"was_translated": "False",
"table_data": "[[\"\", \"Child mortality\", \"Neonatal mortality\", \"Infant mortality\", \"Under-5 mortality\"], [\"1990 JPFHS\", 5, 21, 34, 39], [\"1997 JPFHS\", 6, 19, 29, 34], [\"2002 JPFHS\", 5, 16, 22, 27], [\"2007 JPFHS\", 2, 14, 19, 21], [\"2009 JPFHS\", 5, 15, 23, 28], [\"2012 JPFHS\", 4, 14, 17, 21], [\"2017-18 JPFHS\", 3, 11, 17, 19]]",
"table_text": [
"neonatal, infant, child, and under-5 mortality rates for the 5 years preceding each of seven JPFHS surveys (1990 to 2017-18).",
"Under-5 mortality declined by half over the period, from 39 to 19 deaths per 1,000 live births.",
"The decline in mortality was much greater between the 1990 and 2007 surveys than in the most recent period.",
"Between 2012 and 2017-18, under-5 mortality decreased only modestly, from 21 to 19 deaths per 1,000 live births, and infant mortality remained stable at 17 deaths per 1,000 births."
],
"linearized_input": "Trends in early childhood mortality rates | Deaths per 1,000 live births for the 5-year period before the survey | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21) (Infant mortality, 1990 JPFHS, 34) (Under-5 mortality, 1990 JPFHS, 39) (Child mortality, 1997 JPFHS, 6) (Neonatal mortality, 1997 JPFHS, 19) (Infant mortality, 1997 JPFHS, 29) (Under-5 mortality, 1997 JPFHS, 34) (Child mortality, 2002 JPFHS, 5) (Neonatal mortality, 2002 JPFHS, 16) (Infant mortality, 2002 JPFHS, 22) (Under-5 mortality, 2002 JPFHS, 27) (Child mortality, 2007 JPFHS, 2) (Neonatal mortality, 2007 JPFHS, 14) (Infant mortality, 2007 JPFHS, 19) (Under-5 mortality, 2007 JPFHS, 21) (Child mortality, 2009 JPFHS, 5) (Neonatal mortality, 2009 JPFHS, 15) (Infant mortality, 2009 JPFHS, 23) (Under-5 mortality, 2009 JPFHS, 28) (Child mortality, 2012 JPFHS, 4) (Neonatal mortality, 2012 JPFHS, 14) (Infant mortality, 2012 JPFHS, 17) (Under-5 mortality, 2012 JPFHS, 21) (Child mortality, 2017-18 JPFHS, 3) (Neonatal mortality, 2017-18 JPFHS, 11) (Infant mortality, 2017-18 JPFHS, 17) (Under-5 mortality, 2017-18 JPFHS, 19)"
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `Train`: Training set, includes examples with 0 or more references.
- `Validation`: Validation set, includes examples with 3 or more references.
- `Test`: Test set, includes examples with 3 or more references.
- `Ru`: Russian zero-shot set. Includes English and Russian examples (Russian is not included in any of the other splits).
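The named splits above can be loaded individually. A minimal sketch, using the split names as they appear in the dataset:

```python
from datasets import load_dataset

train = load_dataset("GEM/TaTA", split="train")
validation = load_dataset("GEM/TaTA", split="validation")
test = load_dataset("GEM/TaTA", split="test")
ru = load_dataset("GEM/TaTA", split="ru")  # zero-shot set with English and Russian examples
```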
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The same table across languages is always in the same split, i.e., if table X is in the test split in language A, it will also be in the test split in language B. In addition to filtering examples without transcribed table values, every example of the development and test splits has at least 3 references.
From the examples that fulfilled these criteria, 100 tables were sampled for both development and test for a total of 800 examples each. A manual review process excluded a few tables in each set, resulting in a training set of 6,962 tables, a development set of 752 tables, and a test set of 763 tables.
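Because every `example_id` encodes the document ID before the language code (e.g., `AB20-ar-1`), the cross-language consistency of the splits can be spot-checked by grouping on the document ID. A minimal sketch, assuming the ID format described in the Data Fields section:

```python
from collections import defaultdict
from datasets import load_dataset

data = load_dataset("GEM/TaTA")
doc_to_splits = defaultdict(set)
for split_name in ("train", "validation", "test"):
    for example in data[split_name]:
        # "AB20-ar-1" -> "AB20": strip the language code and the table index.
        doc_id = example["example_id"].rsplit("-", 2)[0]
        doc_to_splits[doc_id].add(split_name)

# Each document (in all its languages) should appear in exactly one split.
leaks = {doc: splits for doc, splits in doc_to_splits.items() if len(splits) > 1}
print(f"{len(leaks)} documents appear in more than one split")
```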
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
There are tables without references, without values, and others that are very large. The dataset is distributed as-is, but the paper describes multiple strategies to deal with data issues.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
There is no other multilingual data-to-text dataset that is parallel over languages. Moreover, over 70% of references in the dataset require reasoning and it is thus of very high quality and challenging for models.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
More languages, parallel across languages, grounded in infographics, not centered on Western entities or source documents
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
reasoning, verbalization, content planning
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
The background section of the [paper](https://arxiv.org/abs/2211.00142) provides a list of related datasets.
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
- `data-to-text`: Term that refers to NLP tasks in which the input is structured information and the output is natural language.
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
`StATA`: A new metric associated with TaTA that is trained on human judgments and achieves a much higher correlation with them than existing metrics.
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The creators used a human evaluation that measured [attribution](https://arxiv.org/abs/2112.12870) and reasoning capabilities of various models. Based on these ratings, they trained a new metric and showed that existing metrics fail to measure attribution.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The curation rationale is to create a multilingual data-to-text dataset that is high-quality and challenging.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The communicative goal is to describe a table in a single sentence.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language was produced by USAID as part of the Demographic and Health Surveys program (https://dhsprogram.com/).
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The topics are related to fertility, family planning, maternal and child health, gender, and nutrition.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Professional annotator who is a fluent speaker of the respective language
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
yes
#### Which Annotation Service
<!-- info: Which annotation services were used? -->
<!-- scope: periscope -->
`other`
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The additional annotations are for system outputs and references and serve to develop metrics for this task.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Ratings were compared to a small (English) expert-curated set of ratings to ensure high agreement. There were additional rounds of training and feedback to annotators to ensure high-quality judgments.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
In addition to data-to-text generation, the dataset can be used for translation or multimodal research.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The DHS program only publishes aggregate survey information and thus, no personal information is included.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved, for example, because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
The dataset focuses on data about African countries, and the languages included in the dataset are spoken in Africa. It aims to improve the representation of African languages in the NLP and NLG communities.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The language producers for this dataset are those employed by the DHS program which is a US-funded program. While the data is focused on African countries, there may be implicit western biases in how the data is presented.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
While tables were transcribed in the available languages, the majority of the tables were published in English as the first language. Professional translators were used to translate the data, which makes it plausible that some translationese exists in the data. Moreover, it was unavoidable to collect reference sentences that are only partially entailed by the source tables.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The domain of health reports includes potentially sensitive topics relating to reproduction, violence, sickness, and death. Perceived negative values could be used to amplify stereotypes about people from the respective regions or countries. The intended academic use of this dataset is to develop and evaluate models that neutrally report the content of these tables; using the outputs to make value judgments is discouraged.
| GEM/TaTA | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:yes",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:en",
"language:fr",
"language:ha",
"language:ig",
"language:pt",
"language:ru",
"language:sw",
"language:yo",
"license:cc-by-sa-4.0",
"data-to-text",
"arxiv:2211.00142",
"arxiv:2112.12870",
"region:us"
] | 2022-11-02T13:21:53+00:00 | {"annotations_creators": ["none"], "language_creators": ["unknown"], "language": ["ar", "en", "fr", "ha", "ig", "pt", "ru", "sw", "yo"], "license": "cc-by-sa-4.0", "multilinguality": [true], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["table-to-text"], "task_ids": [], "pretty_name": "TaTA", "tags": ["data-to-text"], "dataset_info": {"features": [{"name": "gem_id", "dtype": "string"}, {"name": "example_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "unit_of_measure", "dtype": "string"}, {"name": "chart_type", "dtype": "string"}, {"name": "was_translated", "dtype": "string"}, {"name": "table_data", "dtype": "string"}, {"name": "linearized_input", "dtype": "string"}, {"name": "table_text", "sequence": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "ru", "num_bytes": 308435, "num_examples": 210}, {"name": "test", "num_bytes": 1691383, "num_examples": 763}, {"name": "train", "num_bytes": 10019272, "num_examples": 6962}, {"name": "validation", "num_bytes": 1598442, "num_examples": 754}], "download_size": 18543506, "dataset_size": 13617532}} | 2022-11-03T14:23:59+00:00 | [
"2211.00142",
"2112.12870"
] | [
"ar",
"en",
"fr",
"ha",
"ig",
"pt",
"ru",
"sw",
"yo"
] | TAGS
#task_categories-table-to-text #annotations_creators-none #language_creators-unknown #multilinguality-yes #size_categories-unknown #source_datasets-original #language-Arabic #language-English #language-French #language-Hausa #language-Igbo #language-Portuguese #language-Russian #language-Swahili (macrolanguage) #language-Yoruba #license-cc-by-sa-4.0 #data-to-text #arxiv-2211.00142 #arxiv-2112.12870 #region-us
048280e285175987c092a96b6149c032fcecc0c7 |
# Introduction
The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP), as it benefits the performance of various types of applications, including Information Extraction, Machine Translation, and Syntactic Parsing/Chunking.
## Corpus of Business Newswire Texts (business)
The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations created manually by expert linguists. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used in the CoNLL-2003 shared task.
Statistical data on Named Entities occurring in the corpus:
```
| tokens | phrases
------ | ------ | -------
non NE | 200067 |
PER | 1921 | 982
ORG | 20433 | 10533
LOC | 1501 | 1294
MISC | 2041 | 1662
```
### Reference
> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)
## Criminal NE corpus (criminal)
The Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.
There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.
Statistical data on Named Entities occurring in the corpus:
```
| tag-for-meaning | tag-for-tag
------ | --------------- | -----------
non NE | 200067 |
PER | 8101 | 8121
ORG | 8782 | 9480
LOC | 5049 | 5391
MISC | 1917 | 854
```
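## Usage example
A minimal sketch of loading one configuration with the `datasets` library. The repo id `ficsort/SzegedNER`, the config names (`business`, `criminal`) and the split names are taken from the metadata below; everything else is illustrative:
```python
from datasets import load_dataset

# Load the business newswire configuration; "criminal" works the same way.
ner = load_dataset("ficsort/SzegedNER", "business")
example = ner["train"][0]

# ner_tags holds class-label ids; map them back to the IOB2 names (O, B-PER, ...).
label_names = ner["train"].features["ner_tags"].feature.names
print(example["tokens"])
print([label_names[i] for i in example["ner_tags"]])
```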
## Metadata
dataset_info:
- config_name: business
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
- name: document_id
dtype: string
- name: sentence_id
dtype: string
splits:
- name: original
num_bytes: 4452207
num_examples: 9573
- name: test
num_bytes: 856798
num_examples: 1915
- name: train
num_bytes: 3171931
num_examples: 6701
- name: validation
num_bytes: 423478
num_examples: 957
download_size: 0
dataset_size: 8904414
- config_name: criminal
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
- name: document_id
dtype: string
- name: sentence_id
dtype: string
splits:
- name: original
num_bytes: 2807970
num_examples: 5375
- name: test
num_bytes: 520959
num_examples: 1089
- name: train
num_bytes: 1989662
num_examples: 3760
- name: validation
num_bytes: 297349
num_examples: 526
download_size: 0
dataset_size: 5615940
| ficsort/SzegedNER | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:hu",
"hungarian",
"szeged",
"ner",
"region:us"
] | 2022-11-02T15:46:47+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["hu"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "SzegedNER", "tags": ["hungarian", "szeged", "ner"]} | 2022-11-02T15:56:22+00:00 | [] | [
"hu"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hungarian #hungarian #szeged #ner #region-us
|
# Introduction
The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.
## Corpus of Business Newswire Texts (business)
The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.
Statistical data on Named Entities occurring in the corpus:
### Reference
> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)
## Criminal NE corpus (criminal)
The Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.
There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.
Statistical data on Named Entities occurring in the corpus:
## Metadata
dataset_info:
- config_name: business
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
- name: document_id
dtype: string
- name: sentence_id
dtype: string
splits:
- name: original
num_bytes: 4452207
num_examples: 9573
- name: test
num_bytes: 856798
num_examples: 1915
- name: train
num_bytes: 3171931
num_examples: 6701
- name: validation
num_bytes: 423478
num_examples: 957
download_size: 0
dataset_size: 8904414
- config_name: criminal
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
- name: document_id
dtype: string
- name: sentence_id
dtype: string
splits:
- name: original
num_bytes: 2807970
num_examples: 5375
- name: test
num_bytes: 520959
num_examples: 1089
- name: train
num_bytes: 1989662
num_examples: 3760
- name: validation
num_bytes: 297349
num_examples: 526
download_size: 0
dataset_size: 5615940
| [
"# Introduction\n\nThe recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.",
"## Corpus of Business Newswire Texts (business)\n\nThe Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.\n\nStatistical data on Named Entities occurring in the corpus:",
"### Reference\n\n> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)",
"## Criminal NE corpus (criminal)\n\nThe Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.\nThere are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.\n\nStatistical data on Named Entities occurring in the corpus:",
"## Metadata\n\ndataset_info:\n- config_name: business\n features:\n - name: id\n dtype: string\n - name: tokens\n sequence: string\n - name: ner_tags\n sequence:\n class_label:\n names:\n 0: O\n 1: B-PER\n 2: I-PER\n 3: B-ORG\n 4: I-ORG\n 5: B-LOC\n 6: I-LOC\n 7: B-MISC\n 8: I-MISC\n - name: document_id\n dtype: string\n - name: sentence_id\n dtype: string\n splits:\n - name: original\n num_bytes: 4452207\n num_examples: 9573\n - name: test\n num_bytes: 856798\n num_examples: 1915\n - name: train\n num_bytes: 3171931\n num_examples: 6701\n - name: validation\n num_bytes: 423478\n num_examples: 957\n download_size: 0\n dataset_size: 8904414\n- config_name: criminal\n features:\n - name: id\n dtype: string\n - name: tokens\n sequence: string\n - name: ner_tags\n sequence:\n class_label:\n names:\n 0: O\n 1: B-PER\n 2: I-PER\n 3: B-ORG\n 4: I-ORG\n 5: B-LOC\n 6: I-LOC\n 7: B-MISC\n 8: I-MISC\n - name: document_id\n dtype: string\n - name: sentence_id\n dtype: string\n splits:\n - name: original\n num_bytes: 2807970\n num_examples: 5375\n - name: test\n num_bytes: 520959\n num_examples: 1089\n - name: train\n num_bytes: 1989662\n num_examples: 3760\n - name: validation\n num_bytes: 297349\n num_examples: 526\n download_size: 0\n dataset_size: 5615940"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Hungarian #hungarian #szeged #ner #region-us \n",
"# Introduction\n\nThe recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.",
"## Corpus of Business Newswire Texts (business)\n\nThe Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.\n\nStatistical data on Named Entities occurring in the corpus:",
"### Reference\n\n> György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)",
"## Criminal NE corpus (criminal)\n\nThe Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.\nThere are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.\n\nStatistical data on Named Entities occurring in the corpus:",
"## Metadata\n\ndataset_info:\n- config_name: business\n features:\n - name: id\n dtype: string\n - name: tokens\n sequence: string\n - name: ner_tags\n sequence:\n class_label:\n names:\n 0: O\n 1: B-PER\n 2: I-PER\n 3: B-ORG\n 4: I-ORG\n 5: B-LOC\n 6: I-LOC\n 7: B-MISC\n 8: I-MISC\n - name: document_id\n dtype: string\n - name: sentence_id\n dtype: string\n splits:\n - name: original\n num_bytes: 4452207\n num_examples: 9573\n - name: test\n num_bytes: 856798\n num_examples: 1915\n - name: train\n num_bytes: 3171931\n num_examples: 6701\n - name: validation\n num_bytes: 423478\n num_examples: 957\n download_size: 0\n dataset_size: 8904414\n- config_name: criminal\n features:\n - name: id\n dtype: string\n - name: tokens\n sequence: string\n - name: ner_tags\n sequence:\n class_label:\n names:\n 0: O\n 1: B-PER\n 2: I-PER\n 3: B-ORG\n 4: I-ORG\n 5: B-LOC\n 6: I-LOC\n 7: B-MISC\n 8: I-MISC\n - name: document_id\n dtype: string\n - name: sentence_id\n dtype: string\n splits:\n - name: original\n num_bytes: 2807970\n num_examples: 5375\n - name: test\n num_bytes: 520959\n num_examples: 1089\n - name: train\n num_bytes: 1989662\n num_examples: 3760\n - name: validation\n num_bytes: 297349\n num_examples: 526\n download_size: 0\n dataset_size: 5615940"
] |
64335ac3f9bfae6f6e2b467c6c904820ede01999 | # AutoTrain Dataset for project: testtextexists
## Dataset Description
This dataset has been automatically processed by AutoTrain for project testtextexists.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "According to the National Soft Drink Association, the annual consumption of soda by the U.S. citizens is 600 cans",
"target": 66.0
},
{
"text": "Experts say new vaccines are fake!",
"target": 50.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 19 |
| valid | 18 |
| LiveEvil/autotrain-data-testtextexists | [
"language:en",
"region:us"
] | 2022-11-02T15:54:22+00:00 | {"language": ["en"], "task_categories": ["text-scoring"]} | 2022-11-03T15:55:01+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| AutoTrain Dataset for project: testtextexists
=============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project testtextexists.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
82e266d8effde67520d50532587b5f000237b50a |
# CSAbstruct
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance based on the annotator initial accuracy and agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
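The aggregation rule above is described only in prose; the following is a minimal, hedged sketch of what accuracy-weighted label aggregation could look like (the exact weighting scheme and tie-breaking are assumptions, not the authors' precise procedure):

```python
from collections import defaultdict

def aggregate_labels(annotations, annotator_accuracy):
    """Aggregate per-sentence labels by an accuracy-weighted vote.

    annotations: list of (annotator_id, label) pairs for one sentence.
    annotator_accuracy: dict mapping annotator_id -> accuracy on the
        initial test questions (a float in [0, 1]).
    Returns the winning label and a confidence score in [0, 1].
    """
    scores = defaultdict(float)
    for annotator_id, label in annotations:
        # Each vote counts proportionally to the annotator's accuracy.
        scores[label] += annotator_accuracy[annotator_id]
    label = max(scores, key=scores.get)
    confidence = scores[label] / sum(scores.values())
    return label, confidence

# Example: 5 crowdworkers label one sentence.
votes = [("w1", "METHOD"), ("w2", "METHOD"), ("w3", "RESULT"),
         ("w4", "METHOD"), ("w5", "OBJECTIVE")]
acc = {"w1": 0.9, "w2": 0.8, "w3": 0.75, "w4": 0.95, "w5": 0.8}
print(aggregate_labels(votes, acc))  # -> ('METHOD', ~0.63)
```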
## Dataset Statistics
| Statistic | Avg ± std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ± 1.99 |
| Sentence length in words | 21.8 ± 10.0 |
| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER` | 03% |
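For completeness, a minimal sketch of loading the dataset with the `datasets` library; the Hub id `allenai/csabstruct` matches this repository, but the split and field names below are assumptions:

```python
from datasets import load_dataset

# Assumed id and splits; adjust to the actual repository layout.
ds = load_dataset("allenai/csabstruct")
print(ds)              # expected splits: train / validation / test
print(ds["train"][0])  # one abstract: sentences plus per-sentence labels
```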
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
| allenai/csabstruct | [
"license:apache-2.0",
"arxiv:1909.04054",
"region:us"
] | 2022-11-02T17:15:53+00:00 | {"license": "apache-2.0"} | 2022-11-02T17:54:38+00:00 | [
"1909.04054"
] | [] | TAGS
#license-apache-2.0 #arxiv-1909.04054 #region-us
| CSAbstruct
==========
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology](URL), [arXiv](URL), [GitHub](URL)).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT](URL) categories.
Dataset Construction Details
----------------------------
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT](URL) is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)](URL).
Each sentence is annotated by 5 workers on the [Figure-eight platform](URL), with one of 5 categories '{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}'.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance based on the annotator initial accuracy and agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT](URL), our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
Dataset Statistics
------------------
If you use this dataset, please cite the following paper:
| [] | [
"TAGS\n#license-apache-2.0 #arxiv-1909.04054 #region-us \n"
] |
e3dc6d24c7d76a0c9d1c20b6c838abbc918a36b0 |
# Dataset Card for COCO-Stuff
[](https://github.com/shunk031/huggingface-datasets_cocostuff/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://github.com/nightrome/cocostuff
- Repository: https://github.com/nightrome/cocostuff
- Paper (preprint): https://arxiv.org/abs/1612.03716
- Paper (CVPR2018): https://openaccess.thecvf.com/content_cvpr_2018/html/Caesar_COCO-Stuff_Thing_and_CVPR_2018_paper.html
### Dataset Summary
COCO-Stuff is the largest existing dataset with dense stuff and thing annotations.
From the paper:
> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All of annotations use English as primary language.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to append a version-dependent suffix:
```python
from datasets import load_dataset
load_dataset("shunk031/cocostuff", "stuff-thing")
```
#### stuff-things
An example looks as follows.
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
],
'stuff_map': <PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FCA0222D880>,
}
```
#### stuff-only
An example looks as follows.
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
]
}
```
### Data Fields
#### stuff-things
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `stuff_map`: A `PIL.Image.Image` object containing the Stuff + thing PNG-style annotations
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
#### stuff-only
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
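Because `objects` is a plain list of boxes, consuming the fields above is straightforward. A hedged sketch that draws the annotated bounding boxes with Pillow (field names follow this card; the rest is illustrative):

```python
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("shunk031/cocostuff", "stuff-thing", split="validation")
example = ds[0]

image = example["image"].copy()
draw = ImageDraw.Draw(image)
for obj in example["objects"]:
    # Boxes are stored as the top-left corner plus width/height.
    x, y, w, h = obj["x"], obj["y"], obj["w"], obj["h"]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, max(0, y - 10)), obj["name"], fill="red")
image.save("annotated.png")
```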
### Data Splits
| name | train | validation |
|-------------|--------:|-----------:|
| stuff-thing | 118,280 | 5,000 |
| stuff-only | 118,280 | 5,000 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
From the paper:
> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:
- COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse)
- COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
- COCO-Stuff annotations & code: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
### Citation Information
```bibtex
@INPROCEEDINGS{caesar2018cvpr,
title={COCO-Stuff: Thing and stuff classes in context},
author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio},
booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on},
organization={IEEE},
year={2018}
}
```
### Contributions
Thanks to [@nightrome](https://github.com/nightrome) for publishing the COCO-Stuff dataset.
| shunk031/cocostuff | [
"language:en",
"license:cc-by-4.0",
"computer-vision",
"object-detection",
"ms-coco",
"arxiv:1612.03716",
"region:us"
] | 2022-11-02T17:47:27+00:00 | {"language": ["en"], "license": "cc-by-4.0", "tags": ["computer-vision", "object-detection", "ms-coco"], "datasets": ["stuff-thing", "stuff-only"], "metrics": ["accuracy", "iou"]} | 2022-12-09T04:29:27+00:00 | [
"1612.03716"
] | [
"en"
] | TAGS
#language-English #license-cc-by-4.0 #computer-vision #object-detection #ms-coco #arxiv-1612.03716 #region-us
| Dataset Card for COCO-Stuff
===========================
: URL
* Paper (CVPR2018): URL
### Dataset Summary
COCO-Stuff is the largest existing dataset with dense stuff and thing annotations.
From the paper:
>
> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
>
>
>
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All of annotations use English as primary language.
Dataset Structure
-----------------
### Data Instances
When loading a specific configuration, users have to append a version-dependent suffix:
#### stuff-things
An example looks as follows.
#### stuff-only
An example looks as follows.
### Data Fields
#### stuff-things
* 'image': A 'PIL.Image.Image' object containing the image.
* 'image\_id': Unique numeric ID of the image.
* 'image\_filename': File name of the image.
* 'width': Image width.
* 'height': Image height.
* 'stuff\_map': A 'PIL.Image.Image' object containing the Stuff + thing PNG-style annotations
* 'objects': Holds a list of 'Object' data classes:
+ 'object\_id': Unique numeric ID of the object.
+ 'x': x coordinate of bounding box's top left corner.
+ 'y': y coordinate of bounding box's top left corner.
+ 'w': Bounding box width.
+ 'h': Bounding box height.
+ 'name': object name
#### stuff-only
* 'image': A 'PIL.Image.Image' object containing the image.
* 'image\_id': Unique numeric ID of the image.
* 'image\_filename': File name of the image.
* 'width': Image width.
* 'height': Image height.
* 'objects': Holds a list of 'Object' data classes:
+ 'object\_id': Unique numeric ID of the object.
+ 'x': x coordinate of bounding box's top left corner.
+ 'y': y coordinate of bounding box's top left corner.
+ 'w': Bounding box width.
+ 'h': Bounding box height.
+ 'name': object name
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
From the paper:
>
> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.
>
>
>
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:
* COCO images: Flickr Terms of use
* COCO annotations: Creative Commons Attribution 4.0 License
* COCO-Stuff annotations & code: Creative Commons Attribution 4.0 License
### Contributions
Thanks to @nightrome for publishing the COCO-Stuff dataset.
| [
"### Dataset Summary\n\n\nCOCO-Stuff is the largest existing dataset with dense stuff and thing annotations.\n\n\nFrom the paper:\n\n\n\n> \n> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.\n> \n> \n>",
"### Dataset Preprocessing",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nAll of annotations use English as primary language.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nWhen loading a specific configuration, users has to append a version dependent suffix:",
"#### stuff-things\n\n\nAn example of looks as follows.",
"#### stuff-only\n\n\nAn example of looks as follows.",
"### Data Fields",
"#### stuff-things\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'image\\_id': Unique numeric ID of the image.\n* 'image\\_filename': File name of the image.\n* 'width': Image width.\n* 'height': Image height.\n* 'stuff\\_map': A 'PIL.Image.Image' object containing the Stuff + thing PNG-style annotations\n* 'objects': Holds a list of 'Object' data classes:\n\t+ 'object\\_id': Unique numeric ID of the object.\n\t+ 'x': x coordinate of bounding box's top left corner.\n\t+ 'y': y coordinate of bounding box's top left corner.\n\t+ 'w': Bounding box width.\n\t+ 'h': Bounding box height.\n\t+ 'name': object name",
"#### stuff-only\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'image\\_id': Unique numeric ID of the image.\n* 'image\\_filename': File name of the image.\n* 'width': Image width.\n* 'height': Image height.\n* 'objects': Holds a list of 'Object' data classes:\n\t+ 'object\\_id': Unique numeric ID of the object.\n\t+ 'x': x coordinate of bounding box's top left corner.\n\t+ 'y': y coordinate of bounding box's top left corner.\n\t+ 'w': Bounding box width.\n\t+ 'h': Bounding box height.\n\t+ 'name': object name",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nFrom the paper:\n\n\n\n> \n> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.\n> \n> \n>",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCOCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:\n\n\n* COCO images: Flickr Terms of use\n* COCO annotations: Creative Commons Attribution 4.0 License\n* COCO-Stuff annotations & code: Creative Commons Attribution 4.0 License",
"### Contributions\n\n\nThanks to @nightrome for publishing the COCO-Stuff dataset."
] | [
"TAGS\n#language-English #license-cc-by-4.0 #computer-vision #object-detection #ms-coco #arxiv-1612.03716 #region-us \n",
"### Dataset Summary\n\n\nCOCO-Stuff is the largest existing dataset with dense stuff and thing annotations.\n\n\nFrom the paper:\n\n\n\n> \n> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.\n> \n> \n>",
"### Dataset Preprocessing",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nAll of annotations use English as primary language.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nWhen loading a specific configuration, users has to append a version dependent suffix:",
"#### stuff-things\n\n\nAn example of looks as follows.",
"#### stuff-only\n\n\nAn example of looks as follows.",
"### Data Fields",
"#### stuff-things\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'image\\_id': Unique numeric ID of the image.\n* 'image\\_filename': File name of the image.\n* 'width': Image width.\n* 'height': Image height.\n* 'stuff\\_map': A 'PIL.Image.Image' object containing the Stuff + thing PNG-style annotations\n* 'objects': Holds a list of 'Object' data classes:\n\t+ 'object\\_id': Unique numeric ID of the object.\n\t+ 'x': x coordinate of bounding box's top left corner.\n\t+ 'y': y coordinate of bounding box's top left corner.\n\t+ 'w': Bounding box width.\n\t+ 'h': Bounding box height.\n\t+ 'name': object name",
"#### stuff-only\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'image\\_id': Unique numeric ID of the image.\n* 'image\\_filename': File name of the image.\n* 'width': Image width.\n* 'height': Image height.\n* 'objects': Holds a list of 'Object' data classes:\n\t+ 'object\\_id': Unique numeric ID of the object.\n\t+ 'x': x coordinate of bounding box's top left corner.\n\t+ 'y': y coordinate of bounding box's top left corner.\n\t+ 'w': Bounding box width.\n\t+ 'h': Bounding box height.\n\t+ 'name': object name",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nFrom the paper:\n\n\n\n> \n> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.\n> \n> \n>",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCOCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:\n\n\n* COCO images: Flickr Terms of use\n* COCO annotations: Creative Commons Attribution 4.0 License\n* COCO-Stuff annotations & code: Creative Commons Attribution 4.0 License",
"### Contributions\n\n\nThanks to @nightrome for publishing the COCO-Stuff dataset."
] |
88835bf225b88600767b73618ad4f6aa7ea4d77d |
# Sciamano Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by sciamano"```
If it is too strong, just add [] around it.
Trained until 14000 steps
Have fun :)
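For concreteness, a hypothetical prompt and its attenuated variant (the surrounding prompt text is illustrative; `[...]` is the webui syntax for reducing attention):

```
a portrait of a knight, intricate armor, drawn by sciamano
a portrait of a knight, intricate armor, [drawn by sciamano]
```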
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/xlHVUJ4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Nsqdc5Q.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Av4NTd8.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ctVCTiY.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/kO6IE4S.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/sciamano | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-02T21:06:12+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-02T21:15:27+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Sciamano Artist Embedding / Textual Inversion
=============================================
Usage
-----
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 14000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
768e7ebca5725cd852f4579d170a8726b061619d |
# John Kafka Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by john_kafka"```
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/aCnC1zv.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/FdBuWbG.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/1rkuXkZ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/5N9Wp7q.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/v2AkXjU.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/john_kafka | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-02T21:23:38+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-02T21:25:38+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| John Kafka Artist Embedding / Textual Inversion
===============================================
Usage
-----
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
f480d9dfb53d9f3a663001496e929c9184cbeeea |
# Shatter Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by shatter_style"```
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ebXN3C2.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/7zUtEDQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/uEuKyBP.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/qRJ5o3E.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/FybZxbO.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/shatter_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-02T21:26:24+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-02T21:30:48+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Shatter Style Embedding / Textual Inversion
===========================================
Usage
-----
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
85a486545ea37fc9f2326e171ca42d32fcccf89a | This is the dataset! Not the .ckpt trained model - the model is located here: https://huggingface.co/0xJustin/Dungeons-and-Diffusion/tree/main
The newest version has manually captioned races and classes, and the model is trained with EveryDream. 30 images each of: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow,
dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling, tortle, warforged, water_genasi
The original dataset includes ~2500 images of fantasy RPG character art. This dataset has a distribution of races and classes, though only races are annotated right now.
Additionally, BLIP captions were generated for all examples.
Thus, there are two caption sets: one with the human-generated race annotation formatted as 'D&D Character, {race}'
BLIP captions are formatted as 'D&D Character, {race} {caption}', for example: 'D&D Character, drow a woman with horns and horns'
Distribution of races:
```python
{'kenku': 31, 'drow': 162, 'tiefling': 285, 'dwarf': 116,
 'dragonborn': 110, 'gnome': 72, 'orc': 184, 'aasimar': 74,
 'kobold': 61, 'aarakocra': 24, 'tabaxi': 123, 'genasi': 126,
 'human': 652, 'elf': 190, 'goblin': 80, 'halfling': 52,
 'centaur': 22, 'firbolg': 76, 'goliath': 35}
```
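A hedged sketch of how the combined captions and the distribution above could be produced; the `annotations` list is a made-up stand-in for the real per-image metadata:

```python
from collections import Counter

# Hypothetical per-image metadata: (race, BLIP caption) pairs.
annotations = [
    ("drow", "a woman with horns and horns"),
    ("tiefling", "a painting of a demon with a sword"),
    ("drow", "a woman in a dark cloak"),
]

captions = [f"D&D Character, {race} {caption}" for race, caption in annotations]
print(captions[0])  # D&D Character, drow a woman with horns and horns

race_counts = Counter(race for race, _ in annotations)
print(race_counts)  # Counter({'drow': 2, 'tiefling': 1})
```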
There is a high chance some images are mislabelled! Please feel free to enrich this dataset with whatever attributes you think might be useful! | 0xJustin/Dungeons-and-Diffusion | [
"region:us"
] | 2022-11-03T06:04:27+00:00 | {} | 2023-05-19T17:26:58+00:00 | [] | [] | TAGS
#region-us
| This is the dataset! Not the .ckpt trained model - the model is located here: URL
The newest version has manually captioned races and classes, and the model is trained with EveryDream. 30 images each of: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow,
dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling, tortle, warforged, water_genasi
The original dataset includes ~2500 images of fantasy RPG character art. This dataset has a distribution of races and classes, though only races are annotated right now.
Additionally, BLIP captions were generated for all examples.
Thus, there are two caption sets: one with the human-generated race annotation formatted as 'D&D Character, {race}'
BLIP captions are formatted as 'D&D Character, {race} {caption}', for example: 'D&D Character, drow a woman with horns and horns'
Distribution of races:
({'kenku': 31,
'drow': 162,
'tiefling': 285,
'dwarf': 116,
'dragonborn': 110,
'gnome': 72,
'orc': 184,
'aasimar': 74,
'kobold': 61,
'aarakocra': 24,
'tabaxi': 123,
'genasi': 126,
'human': 652,
'elf': 190,
'goblin': 80,
'halfling': 52,
'centaur': 22,
'firbolg': 76,
'goliath': 35})
There is a high chance some images are mislabelled! Please feel free to enrich this dataset with whatever attributes you think might be useful! | [] | [
"TAGS\n#region-us \n"
] |
ab4b90142da320df49a31aaa9fa8df1df67d123f | # Dataset Card for "music_genres_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/music_genres_small | [
"region:us"
] | 2022-11-03T13:36:11+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 392427659.9527852, "num_examples": 1000}], "download_size": 390675126, "dataset_size": 392427659.9527852}} | 2022-11-03T13:36:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "music_genres_small"
More Information needed | [
"# Dataset Card for \"music_genres_small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"music_genres_small\"\n\nMore Information needed"
] |
17e87976452beb6cd28dd83ee3b98604fca98632 | # Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Markmus/amazon-shoe-reviews | [
"region:us"
] | 2022-11-03T13:41:22+00:00 | {"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}, {"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}], "download_size": 10939033, "dataset_size": 18719628.0}} | 2022-11-03T13:41:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "amazon-shoe-reviews"
More Information needed | [
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] |
c0e1f6c4ab0b7ec8268e9eed39185c002df10344 | # Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Matthaios/amazon-shoe-reviews | [
"region:us"
] | 2022-11-03T13:43:26+00:00 | {"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}, {"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}], "download_size": 10939031, "dataset_size": 18719628.0}} | 2022-11-03T13:43:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "amazon-shoe-reviews"
More Information needed | [
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] |
490bc8a946289d68fe7c628afa5c36b52ca8f9e3 |
# PyCoder
This repository contains the dataset for the paper [Syntax-Aware On-the-Fly Code Completion](https://arxiv.org/abs/2211.04673)
The sample code to run the model can be found in the notebook "`assets/notebooks/inference.ipynb`" in our GitHub repository: https://github.com/awsm-research/pycoder.
PyCoder is an auto code completion model which leverages a Multi-Task Training technique (MTT) to cooperatively
learn the code prediction task and the type prediction task. For the type prediction
task, we propose to leverage the standard Python token
type information (e.g., String, Number, Name, Keyword),
which is readily available and lightweight, instead of using
the AST information, which requires source code to be parsable for extraction, limiting its ability to perform on-the-fly code completion (see Section 2.3 in our paper).
More information can be found in our paper.
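As a rough sketch (not the authors' code), the lightweight token-type labels described above can be derived with Python's standard `tokenize` and `keyword` modules alone, with no AST construction required:

```python
import io
import keyword
import tokenize

code = "x = max(1, 2)  # pick the larger\n"

# Walk the token stream and attach a lightweight type label to each token.
for tok in tokenize.generate_tokens(io.StringIO(code).readline):
    if tok.type == tokenize.NAME and keyword.iskeyword(tok.string):
        label = "Keyword"                    # e.g. 'if', 'def', 'return'
    else:
        label = tokenize.tok_name[tok.type]  # NAME, NUMBER, STRING, OP, ...
    print(f"{tok.string!r:>12} -> {label}")
```

Lexing of this kind tends to be far more tolerant of incomplete code prefixes than a full parse, which is the property that keeps token types available in on-the-fly completion settings.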
If you use our code or PyCoder, please cite our paper.
<pre><code>@article{takerngsaksiri2022syntax,
title={Syntax-Aware On-the-Fly Code Completion},
author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang},
journal={arXiv preprint arXiv:2211.04673},
year={2022}
}</code></pre>
| Wannita/PyCoder | [
"task_categories:text-generation",
"license:mit",
"code",
"arxiv:2211.04673",
"region:us"
] | 2022-11-03T13:45:53+00:00 | {"license": "mit", "task_categories": ["text-generation"], "datasets": ["Wannita/PyCoder"], "metrics": ["accuracy", "bleu", "meteor", "exact_match", "rouge"], "library_name": "transformers", "pipeline_tag": "text-generation", "tags": ["code"]} | 2023-03-29T14:52:53+00:00 | [
"2211.04673"
] | [] | TAGS
#task_categories-text-generation #license-mit #code #arxiv-2211.04673 #region-us
|
# PyCoder
This repository contains the dataset for the paper Syntax-Aware On-the-Fly Code Completion
The sample code to run the model can be found in the notebook "'assets/notebooks/URL'" in our GitHub repository: URL
PyCoder is an auto code completion model which leverages a Multi-Task Training technique (MTT) to cooperatively
learn the code prediction task and the type prediction task. For the type prediction
task, we propose to leverage the standard Python token
type information (e.g., String, Number, Name, Keyword),
which is readily available and lightweight, instead of using
the AST information, which requires source code to be parsable for extraction, limiting its ability to perform on-the-fly code completion (see Section 2.3 in our paper).
More information can be found in our paper.
If you use our code or PyCoder, please cite our paper.
<pre><code>@article{takerngsaksiri2022syntax,
title={Syntax-Aware On-the-Fly Code Completion},
author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang},
journal={arXiv preprint arXiv:2211.04673},
year={2022}
}</code></pre>
| [
"# PyCoder\n\nThis repository contains the dataset for the paper Syntax-Aware On-the-Fly Code Completion\n\nThe sample code to run the model can be found in directory: \"'assets/notebooks/URL'\" in our GitHub: URL\n\nPyCoder is an auto code completion model which leverages a Multi-Task Training technique (MTT) to cooperatively\nlearn the code prediction task and the type prediction task. For the type prediction\ntask, we propose to leverage the standard Python token\ntype information (e.g., String, Number, Name, Keyword),\nwhich is readily available and lightweight, instead of using\nthe AST information which requires source code to be parsable for an extraction, limiting its ability to perform on-the-fly code completion (see Section 2.3 in our paper). \n\nMore information can be found in our paper.\n\nIf you use our code or PyCoder, please cite our paper.\n\n<pre><code>@article{takerngsaksiri2022syntax,\n title={Syntax-Aware On-the-Fly Code Completion},\n author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang},\n journal={arXiv preprint arXiv:2211.04673},\n year={2022}\n}</code></pre>"
] | [
"TAGS\n#task_categories-text-generation #license-mit #code #arxiv-2211.04673 #region-us \n",
"# PyCoder\n\nThis repository contains the dataset for the paper Syntax-Aware On-the-Fly Code Completion\n\nThe sample code to run the model can be found in directory: \"'assets/notebooks/URL'\" in our GitHub: URL\n\nPyCoder is an auto code completion model which leverages a Multi-Task Training technique (MTT) to cooperatively\nlearn the code prediction task and the type prediction task. For the type prediction\ntask, we propose to leverage the standard Python token\ntype information (e.g., String, Number, Name, Keyword),\nwhich is readily available and lightweight, instead of using\nthe AST information which requires source code to be parsable for an extraction, limiting its ability to perform on-the-fly code completion (see Section 2.3 in our paper). \n\nMore information can be found in our paper.\n\nIf you use our code or PyCoder, please cite our paper.\n\n<pre><code>@article{takerngsaksiri2022syntax,\n title={Syntax-Aware On-the-Fly Code Completion},\n author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang},\n journal={arXiv preprint arXiv:2211.04673},\n year={2022}\n}</code></pre>"
] |
7af12b091affeb6e55d0f4871dc98af83fabe28b | ---
# Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://github.com/JanKalo/KAMEL
- **Repository:**
https://github.com/JanKalo/KAMEL
- **Paper:**
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
### Dataset Summary
This dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge
from Wikidata and Wikipedia.
See the paper for more details. For more information, also see:
https://github.com/JanKalo/KAMEL
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
KAMEL has the following fields:
* index: the id
* sub_label: a label for the subject
* obj_uri: Wikidata uri for the object
* obj_labels: multiple labels for the object
* chosen_label: the preferred label
* rel_uri: Wikidata uri for the relation
* rel_label: a label for the relation
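As a usage sketch, a single instance can be turned into a question-style probe from these fields; the split name and prompt wording below are illustrative assumptions, not the paper's exact setup.

```python
from datasets import load_dataset

# Load KAMEL (split name assumed for illustration).
ds = load_dataset("LeandraFichtel/KAMEL", split="train")

ex = ds[0]
# Build a question-style probe from the relation and subject labels.
prompt = f"What is the {ex['rel_label']} of {ex['sub_label']}?"
print(prompt)
print("preferred answer:", ex["chosen_label"])
print("accepted answers:", ex["obj_labels"])
```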
### Data Splits
The dataset is split into a training, validation, and test dataset.
It contains 234 Wikidata relations.
For each relation there exist 200 training, 100 validation,
and 100 test instances.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to explore what knowledge graph facts are memorized by large language models.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
created from Wikidata and Wikipedia.
### Annotations
#### Annotation process
There is no human annotation, but only automatic linking from Wikidata facts to Wikipedia articles.
The details about the process can be found in the paper.
#### Who are the annotators?
Machine Annotations
### Personal and Sensitive Information
Unknown, but likely information about famous people mentioned in the English Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL.
## Additional Information
### Dataset Curators
The authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. See https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
| LeandraFichtel/KAMEL | [
"region:us"
] | 2022-11-03T14:00:02+00:00 | {} | 2022-11-03T16:39:49+00:00 | [] | [] | TAGS
#region-us
| ---
# Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
URL
- Repository:
URL
- Paper:
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
### Dataset Summary
This dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge
from Wikidata and Wikipedia.
See the paper for more details. For more information, also see:
URL
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
KAMEL has the following fields:
* index: the id
* sub_label: a label for the subject
* obj_uri: Wikidata uri for the object
* obj_labels: multiple labels for the object
* chosen_label: the preferred label
* rel_uri: Wikidata uri for the relation
* rel_label: a label for the relation
### Data Splits
The dataset is split into a training, validation, and test dataset.
It contains 234 Wikidata relations.
For each relation there exist 200 training, 100 validation,
and 100 test instances.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to explore what knowledge graph facts are memorized by large language models.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
created from Wikidata and Wikipedia.
### Annotations
#### Annotation process
There is no human annotation, but only automatic linking from Wikidata facts to Wikipedia articles.
The details about the process can be found in the paper.
#### Who are the annotators?
Machine Annotations
### Personal and Sensitive Information
Unknown, but likely information about famous people mentioned in the English Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL.
## Additional Information
### Dataset Curators
The authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. See URL
@inproceedings{kalo2022kamel,
title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},
author={Kalo, Jan-Christoph and Fichtel, Leandra},
booktitle={Automated Knowledge Base Construction},
year={2022}
}
| [
"# Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\n@inproceedings{kalo2022kamel,\n title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},\n author={Kalo, Jan-Christoph and Fichtel, Leandra},\n booktitle={Automated Knowledge Base Construction},\n year={2022}\n}",
"### Dataset Summary\nThis dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge\nfrom Wikidata and Wikipedia.\n\nSee the paper for more details. For more information, also see:\nURL",
"### Languages\nen",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\nKAMEL has the following fields:\n* index: the id\n* sub_label: a label for the subject \n* obj_uri: Wikidata uri for the object \n* obj_labels: multiple labels for the object\n* chosen_label: the preferred label \n* rel_uri: Wikidata uri for the relation\n* rel_label: a label for the relation",
"### Data Splits\nThe dataset is split into a training, validation, and test dataset.\nIt contains 234 Wikidata relations. \nFor each relation there exist 200 training, 100 validation,\nand 100 test instances.",
"## Dataset Creation",
"### Curation Rationale\nThis dataset was gathered and created to explore what knowledge graph facts are memorized by large language models.",
"### Source Data",
"#### Initial Data Collection and Normalization\nSee the reaserch paper and website for more detail. The dataset was\ncreated from Wikidata and Wikipedia.",
"### Annotations",
"#### Annotation process\nThere is no human annotation, but only automatic linking from Wikidata facts to Wikipedia articles.\nThe details about the process can be found in the paper.",
"#### Who are the annotators?\nMachine Annotations",
"### Personal and Sensitive Information\nUnkown, but likely information about famous people mentioned in the English Wikipedia.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe goal for the work is to probe the understanding of language models.",
"### Discussion of Biases\nSince the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL.",
"## Additional Information",
"### Dataset Curators\nThe authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig.",
"### Licensing Information\nThe Creative Commons Attribution-Noncommercial 4.0 International License. see URL\n\n@inproceedings{kalo2022kamel,\n title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},\n author={Kalo, Jan-Christoph and Fichtel, Leandra},\n booktitle={Automated Knowledge Base Construction},\n year={2022}\n}"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\n@inproceedings{kalo2022kamel,\n title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},\n author={Kalo, Jan-Christoph and Fichtel, Leandra},\n booktitle={Automated Knowledge Base Construction},\n year={2022}\n}",
"### Dataset Summary\nThis dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge\nfrom Wikidata and Wikipedia.\n\nSee the paper for more details. For more information, also see:\nURL",
"### Languages\nen",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\nKAMEL has the following fields:\n* index: the id\n* sub_label: a label for the subject \n* obj_uri: Wikidata uri for the object \n* obj_labels: multiple labels for the object\n* chosen_label: the preferred label \n* rel_uri: Wikidata uri for the relation\n* rel_label: a label for the relation",
"### Data Splits\nThe dataset is split into a training, validation, and test dataset.\nIt contains 234 Wikidata relations. \nFor each relation there exist 200 training, 100 validation,\nand 100 test instances.",
"## Dataset Creation",
"### Curation Rationale\nThis dataset was gathered and created to explore what knowledge graph facts are memorized by large language models.",
"### Source Data",
"#### Initial Data Collection and Normalization\nSee the reaserch paper and website for more detail. The dataset was\ncreated from Wikidata and Wikipedia.",
"### Annotations",
"#### Annotation process\nThere is no human annotation, but only automatic linking from Wikidata facts to Wikipedia articles.\nThe details about the process can be found in the paper.",
"#### Who are the annotators?\nMachine Annotations",
"### Personal and Sensitive Information\nUnkown, but likely information about famous people mentioned in the English Wikipedia.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe goal for the work is to probe the understanding of language models.",
"### Discussion of Biases\nSince the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL.",
"## Additional Information",
"### Dataset Curators\nThe authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig.",
"### Licensing Information\nThe Creative Commons Attribution-Noncommercial 4.0 International License. see URL\n\n@inproceedings{kalo2022kamel,\n title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models},\n author={Kalo, Jan-Christoph and Fichtel, Leandra},\n booktitle={Automated Knowledge Base Construction},\n year={2022}\n}"
] |
dfa2ec4ee00fcd57232b5edaa3e37a5ab1c0985e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ce107](https://huggingface.co/ce107) for evaluating this model. | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-fc121d-1975865996 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-03T14:10:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-03T14:11:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ce107 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ce107 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ce107 for evaluating this model."
] |
4162853a87a970f96bdb689dcdc35732d8aaa854 |
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities according to [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words).
The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context, valid_hypothesis, invalid_hypothesis).
Code : [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing)
# Citation
https://arxiv.org/abs/2211.03358
```bib
@inproceedings{sileo-moens-2023-probing,
title = "Probing neural language models for understanding of words of estimative probability",
author = "Sileo, Damien and
Moens, Marie-francine",
booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.starsem-1.41",
doi = "10.18653/v1/2023.starsem-1.41",
pages = "469--476",
}
```
| sileod/probability_words_nli | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"wep",
"words of estimative probability",
"probability",
"logical reasoning",
"soft logic",
"nli",
"verbal probabilities",
"natural-language-inference",
"reasoning",
"logic",
"arxiv:2211.03358",
"region:us"
] | 2022-11-03T14:21:14+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "multiple-choice", "question-answering"], "task_ids": ["open-domain-qa", "multiple-choice-qa", "natural-language-inference", "multi-input-text-classification"], "pretty_name": "probability_words_nli", "paperwithcoode_id": "probability-words-nli", "tags": ["wep", "words of estimative probability", "probability", "logical reasoning", "soft logic", "nli", "verbal probabilities", "natural-language-inference", "reasoning", "logic"], "train-eval-index": [{"config": "usnli", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}, {"config": "reasoning-1hop", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}, {"config": "reasoning-2hop", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}]} | 2023-09-06T13:56:43+00:00 | [
"2211.03358"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-multiple-choice #task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #task_ids-natural-language-inference #task_ids-multi-input-text-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #wep #words of estimative probability #probability #logical reasoning #soft logic #nli #verbal probabilities #natural-language-inference #reasoning #logic #arxiv-2211.03358 #region-us
|
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (URL to directly check whether models can detect the WEP matching human-annotated probabilities according to Fagen-Ulmschneider, 2018.
The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context, valid_hypothesis, invalid_hypothesis).
Code : colab
URL
| [
"# Dataset accompanying the \"Probing neural language models for understanding of words of estimative probability\" article\n\nThis dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like \"probably\", \"maybe\", \"surely\", \"impossible\".\n\nWe used probabilitic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (URL to directly check whether models can detect the WEP matching human-annotated probabilities according to Fagen-Ulmschneider, 2018.\nThe dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context,valid_hypothesis, invalid_hypothesis).\n\nCode : colab\n\nURL"
] | [
"TAGS\n#task_categories-text-classification #task_categories-multiple-choice #task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #task_ids-natural-language-inference #task_ids-multi-input-text-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #wep #words of estimative probability #probability #logical reasoning #soft logic #nli #verbal probabilities #natural-language-inference #reasoning #logic #arxiv-2211.03358 #region-us \n",
"# Dataset accompanying the \"Probing neural language models for understanding of words of estimative probability\" article\n\nThis dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like \"probably\", \"maybe\", \"surely\", \"impossible\".\n\nWe used probabilitic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (URL to directly check whether models can detect the WEP matching human-annotated probabilities according to Fagen-Ulmschneider, 2018.\nThe dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context,valid_hypothesis, invalid_hypothesis).\n\nCode : colab\n\nURL"
] |
772d7f4015382026d97b6c8a2e477a8a3f1fbbc6 | # Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | popaqy/my_dataset | [
"region:us"
] | 2022-11-03T14:27:51+00:00 | {"dataset_info": {"features": [{"name": "bg", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "bg_wrong", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1792707, "num_examples": 3442}], "download_size": 908032, "dataset_size": 1792707}} | 2022-11-03T14:27:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "my_dataset"
More Information needed | [
"# Dataset Card for \"my_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"my_dataset\"\n\nMore Information needed"
] |
2ac5bf4dc855aacdfc4ec1bdf9691d721207c3a6 |
# Dataset Card for Polish ASR BIGOS corpora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/michaljunczyk/pl-asr-bigos
- **Repository:** https://github.com/goodmike31/pl-asr-bigos-tools
- **Paper:** https://annals-csis.org/proceedings/2023/drp/1609.html
- **Leaderboard:** https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark
- **Point of Contact:** [email protected]
### Dataset Summary
The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aims at simplifying the access and use of publicly available ASR speech datasets for Polish.<br>
The initial release consists of a test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets.
### Supported Tasks and Leaderboards
The leaderboard with benchmark of publicly available ASR systems supporting Polish is [under construction](https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark/).<br>
Evaluation results of 3 commercial and 5 freely available ASR systems can be found in the [paper](https://annals-csis.org/proceedings/2023/drp/1609.html).
### Languages
Polish
## Dataset Structure
The dataset consists of audio recordings in WAV format and corresponding metadata.<br>
Audio and metadata can be used in raw format (TSV) or via the Hugging Face datasets library.
### Data Instances
1900 audio files with original transcriptions are available in "test" split.<br>
This constitutes 1.6% of the total available transcribed speech in the 10 source datasets considered in the initial release.
### Data Fields
Available fields:
* file_id - file identifier
* dataset_id - source dataset identifier
* audio - binary representation of audio file
* ref_original - original transcription of audio file
* hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system
* hyp_google_default - ASR hypothesis (output) from Google ASR system, default model
* hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model
* hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model
* hyp_whisper_base - ASR hypothesis (output) from Whisper base model
* hyp_whisper_small - ASR hypothesis (output) from Whisper small model
* hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model
* hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model
<br><br>
Fields to be added in the next release:
* ref_spoken - manual transcription in a spoken format (without normalization)
* ref_written - manual transcription in a written format (with normalization)
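As an illustrative sketch (not the official benchmark pipeline), the hypothesis columns above can be scored against `ref_original` with any WER implementation, for example the third-party `jiwer` package; the lowercasing below is an assumed, simplistic normalization.

```python
from datasets import load_dataset
from jiwer import wer  # third-party WER library; any implementation works

ds = load_dataset("michaljunczyk/pl-asr-bigos", split="test")

# Score one system: references vs. its hypothesis column.
refs = [r.lower() for r in ds["ref_original"]]
hyps = [h.lower() for h in ds["hyp_whisper_base"]]
print("WER (whisper-base):", wer(refs, hyps))
```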
### Data Splits
The initial release contains only the "test" split.<br>
"Dev" and "train" splits will be added in the next release.
## Dataset Creation
### Curation Rationale
[Polish ASR Speech Data Catalog](https://github.com/goodmike31/pl-asr-speech-data-survey) was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>
The following mandatory criteria were considered:
* Dataset must be downloadable.
* The license must allow for free, noncommercial use.
* Transcriptions must be available and align with the recordings.
* The sampling rate of audio recordings must be at least 8 kHz.
* Audio encoding using a minimum of 16 bits per sample.
### Source Data
10 datasets that meet the criteria were chosen as sources for the BIGOS dataset.
* The Common Voice dataset (mozilla-common-voice-19)
* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)
* The Clarin Studio Corpus (clarin-pjatk-studio-15)
* The Clarin Mobile Corpus (clarin-pjatk-mobile-15)
* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info [here](https://www.ii.pwr.edu.pl/)
* The Munich-AI Labs Speech corpus (mailabs-19)
* The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info [here](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy)
#### Initial Data Collection and Normalization
Source text and audio files were extracted and encoded in a unified format.<br>
Dataset-specific transcription norms are preserved, including punctuation and casing. <br>
To strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br>
The only exception is ’pwr-azon-spont-20’, which contains significantly longer recordings and utterances, therefore only 100 samples are selected. <br>
#### Who are the source language producers?
1. Clarin corpora - Polish-Japanese Academy of Information Technology
2. Common Voice - Mozilla Foundation
3. Multilingual LibriSpeech - Facebook AI Research
4. Jerzy Sas and AZON datasets - Politechnika Wrocławska
Please refer to the [paper](https://www.researchgate.net/publication/374713542_BIGOS_-_Benchmark_Intended_Grouping_of_Open_Speech_Corpora_for_Polish_Automatic_Speech_Recognition) for more details.
### Annotations
#### Annotation process
The current release contains original transcriptions.
Manual transcriptions are planned for subsequent releases.
#### Who are the annotators?
Depends on the source dataset.
### Personal and Sensitive Information
This corpus does not contain PII or Sensitive Information.
All speaker IDs are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
To be updated.
### Discussion of Biases
To be updated.
### Other Known Limitations
The dataset in the initial release contains only a subset of recordings from original datasets.
## Additional Information
### Dataset Curators
Original authors of the source datasets - please refer to [source-data](#source-data) for details.
Michał Junczyk ([email protected]) - curator of BIGOS corpora.
### Licensing Information
The BIGOS corpora is available under [Creative Commons By Attribution Share Alike 4.0 license.](https://creativecommons.org/licenses/by-sa/4.0/)
Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:
* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)
* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).
* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)
* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)
* [Proprietary License of Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)
* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)
### Citation Information
Please cite the [BIGOS V1 paper](https://annals-csis.org/proceedings/2023/drp/1609.html).
### Contributions
Thanks to [@goodmike31](https://github.com/goodmike31) for adding this dataset. | michaljunczyk/pl-asr-bigos | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:other",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:pl",
"license:cc-by-sa-4.0",
"benchmark",
"polish",
"asr",
"speech",
"doi:10.57967/hf/1068",
"region:us"
] | 2022-11-03T16:38:50+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated", "other", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated", "other"], "language": ["pl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "pl-asr-bigos", "tags": ["benchmark", "polish", "asr", "speech"], "extra_gated_prompt": "Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:\n* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)\n* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).\n* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)\n* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)\n* [Proprietiary License of Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)\n* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)\nTo use selected dataset, you also need to fill in the access forms on the specific datasets pages:\n* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0", "extra_gated_fields": {"I hereby confirm that I have read and accepted the license terms of datasets comprising BIGOS corpora": "checkbox", "I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox"}} | 2024-01-08T17:14:38+00:00 | [] | [
"pl"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #annotations_creators-expert-generated #annotations_creators-other #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-Polish #license-cc-by-sa-4.0 #benchmark #polish #asr #speech #doi-10.57967/hf/1068 #region-us
|
# Dataset Card for Polish ASR BIGOS corpora
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: michal.junczyk@URL
### Dataset Summary
The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aims at simplifying the access and use of publicly available ASR speech datasets for Polish.<br>
The initial release consists of a test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets.
### Supported Tasks and Leaderboards
The leaderboard with benchmark of publicly available ASR systems supporting Polish is under construction.<br>
Evaluation results of 3 commercial and 5 freely available ASR systems can be found in the paper.
### Languages
Polish
## Dataset Structure
The dataset consists of audio recordings in WAV format and corresponding metadata.<br>
Audio and metadata can be used in raw format (TSV) or via the Hugging Face datasets library.
### Data Instances
1900 audio files with original transcriptions are available in "test" split.<br>
This constitutes 1.6% of the total available transcribed speech in the 10 source datasets considered in the initial release.
### Data Fields
Available fields:
* file_id - file identifier
* dataset_id - source dataset identifier
* audio - binary representation of audio file
* ref_original - original transcription of audio file
* hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system
* hyp_google_default - ASR hypothesis (output) from Google ASR system, default model
* hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model
* hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model
* hyp_whisper_base - ASR hypothesis (output) from Whisper base model
* hyp_whisper_small - ASR hypothesis (output) from Whisper small model
* hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model
* hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model
<br><br>
Fields to be added in the next release:
* ref_spoken - manual transcription in a spoken format (without normalization)
* ref_written - manual transcription in a written format (with normalization)
### Data Splits
The initial release contains only the "test" split.<br>
"Dev" and "train" splits will be added in the next release.
## Dataset Creation
### Curation Rationale
Polish ASR Speech Data Catalog was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>
The following mandatory criteria were considered:
* Dataset must be downloadable.
* The license must allow for free, noncommercial use.
* Transcriptions must be available and align with the recordings.
* The sampling rate of audio recordings must be at least 8 kHz.
* Audio encoding using a minimum of 16 bits per sample.
### Source Data
10 datasets that meet the criteria were chosen as sources for the BIGOS dataset.
* The Common Voice dataset (mozilla-common-voice-19)
* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)
* The Clarin Studio Corpus (clarin-pjatk-studio-15)
* The Clarin Mobile Corpus (clarin-pjatk-mobile-15)
* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info here
* The Munich-AI Labs Speech corpus (mailabs-19)
* The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info here
#### Initial Data Collection and Normalization
Source text and audio files were extracted and encoded in a unified format.<br>
Dataset-specific transcription norms are preserved, including punctuation and casing. <br>
To strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br>
The only exception is ’pwr-azon-spont-20’, which contains significantly longer recordings and utterances, therefore only 100 samples are selected. <br>
#### Who are the source language producers?
1. Clarin corpora - Polish-Japanese Academy of Information Technology
2. Common Voice - Mozilla Foundation
3. Multilingual LibriSpeech - Facebook AI Research
4. Jerzy Sas and AZON datasets - Politechnika Wrocławska
Please refer to the paper for more details.
### Annotations
#### Annotation process
The current release contains original transcriptions.
Manual transcriptions are planned for subsequent releases.
#### Who are the annotators?
Depends on the source dataset.
### Personal and Sensitive Information
This corpus does not contain PII or Sensitive Information.
All speaker IDs are anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
To be updated.
### Discussion of Biases
To be updated.
### Other Known Limitations
The dataset in the initial release contains only a subset of recordings from original datasets.
## Additional Information
### Dataset Curators
Original authors of the source datasets - please refer to source-data for details.
Michał Junczyk (michal.junczyk@URL) - curator of BIGOS corpora.
### Licensing Information
The BIGOS corpora is available under Creative Commons By Attribution Share Alike 4.0 license.
Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:
* Creative Commons 0 which applies to Common Voice
* Creative Commons By Attribution Share Alike 4.0, which applies to Clarin Cyfry, Azon acoustic speech resources corpus.
* Creative Commons By Attribution 3.0, which applies to CLARIN Mobile database, CLARIN Studio database, PELCRA Spelling and Numbers Voice Database and FLEURS dataset
* Creative Commons By Attribution 4.0, which applies to Multilingual Librispeech and Poly AI Minds 14
* Proprietary License of Munich AI Labs dataset
* Public domain mark, which applies to PWR datasets
Please cite the BIGOS V1 paper.
### Contributions
Thanks to @goodmike31 for adding this dataset. | [
"# Dataset Card for Polish ASR BIGOS corpora",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: michal.junczyk@URL",
"### Dataset Summary\n\nThe BIGOS (Benchmark Intended Grouping of Open Speech) corpora aims at simplifying the access and use of publicly available ASR speech datasets for Polish.<br>\nThe initial release consist of test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets.",
"### Supported Tasks and Leaderboards\nThe leaderboard with benchmark of publicly available ASR systems supporting Polish is under construction.<br>\nEvaluation results of 3 commercial and 5 freely available can be found in the paper.",
"### Languages\nPolish",
"## Dataset Structure\nDataset consists audio recordings in WAV format and corresponding metadata.<br>\nAudio and metadata can be used in raw format (TSV) or via hugging face datasets library.",
"### Data Instances\n1900 audio files with original transcriptions are available in \"test\" split.<br>\nThis consitutes 1.6% of the total available transcribed speech in 10 source datasets considered in the initial release.",
"### Data Fields\nAvailable fields:\n* file_id - file identifier\n* dataset_id - source dataset identifier\n* audio - binary representation of audio file\n* ref_original - original transcription of audio file\n* hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system\n* hyp_google_default - ASR hypothesis (output) from Google ASR system, default model\n* hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model\n* hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model\n* hyp_whisper_base - ASR hypothesis (output) from Whisper base model\n* hyp_whisper_small - ASR hypothesis (output) from Whisper small model\n* hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model\n* hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model\n<br><br>\n\nFields to be added in the next release:\n* ref_spoken - manual transcription in a spoken format (without normalization)\n* ref_written - manual transcription in a written format (with normalization)",
"### Data Splits\nInitial release contains only \"test\" split.<br>\n\"Dev\" and \"train\" splits will be added in the next release.",
"## Dataset Creation",
"### Curation Rationale\nPolish ASR Speech Data Catalog was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>\nThe following mandatory criteria were considered:\n* Dataset must be downloadable.\n* The license must allow for free, noncommercial use.\n* Transcriptions must be available and align with the recordings.\n* The sampling rate of audio recordings must be at least 8 kHz.\n* Audio encoding using a minimum of 16 bits per sample.",
"### Source Data\n10 datasets that meet the criteria were chosen as sources for the BIGOS dataset.\n* The Common Voice dataset (mozilla-common-voice-19)\n* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)\n* The Clarin Studio Corpus (clarin-pjatk-studio-15)\n* The Clarin Mobile Corpus (clarin-pjatk-mobile-15)\n* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info here\n* The Munich-AI Labs Speech corpus (mailabs-19)\n* The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info here",
"#### Initial Data Collection and Normalization\nSource text and audio files were extracted and encoded in a unified format.<br>\nDataset-specific transcription norms are preserved, including punctuation and casing. <br>\nTo strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br>\nThe only exception is ’pwr-azon-spont-20’, which contains significantly longer recordings and utterances, therefore only 100 samples are selected. <br>",
"#### Who are the source language producers?\n1. Clarin corpora - Polish Japanese Academy of Technology\n2. Common Voice - Mozilla foundation\n3. Multlingual librispeech - Facebook AI research lab\n4. Jerzy Sas and AZON datasets - Politechnika Wrocławska\n\nPlease refer to the paper for more details.",
"### Annotations",
"#### Annotation process\n\nCurrent release contains original transcriptions.\nManual transcriptions are planned for subsequent releases.",
"#### Who are the annotators?\nDepends on the source dataset.",
"### Personal and Sensitive Information\nThis corpus does not contain PII or Sensitive Information.\nAll IDs pf speakers are anonymized.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nTo be updated.",
"### Discussion of Biases\nTo be updated.",
"### Other Known Limitations\nThe dataset in the initial release contains only a subset of recordings from original datasets.",
"## Additional Information",
"### Dataset Curators\nOriginal authors of the source datasets - please refer to source-data for details.\n\nMichał Junczyk (michal.junczyk@URL) - curator of BIGOS corpora.",
"### Licensing Information\nThe BIGOS corpora is available under Creative Commons By Attribution Share Alike 4.0 license.\n\nOriginal datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:\n* Creative Commons 0 which applies to Common Voice\n* Creative Commons By Attribution Share Alike 4.0, which applies to Clarin Cyfry, Azon acoustic speech resources corpus.\n* Creative Commons By Attribution 3.0, which applies to CLARIN Mobile database, CLARIN Studio database, PELCRA Spelling and Numbers Voice Database and FLEURS dataset\n* Creative Commons By Attribution 4.0, which applies to Multilingual Librispeech and Poly AI Minds 14\n* Proprietiary License of Munich AI Labs dataset\n* Public domain mark, which applies to PWR datasets\n\n\nPlease cite BIGOS V1 paper.",
"### Contributions\n\nThanks to @goodmike31 for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #annotations_creators-expert-generated #annotations_creators-other #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-Polish #license-cc-by-sa-4.0 #benchmark #polish #asr #speech #doi-10.57967/hf/1068 #region-us \n",
"# Dataset Card for Polish ASR BIGOS corpora",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: michal.junczyk@URL",
"### Dataset Summary\n\nThe BIGOS (Benchmark Intended Grouping of Open Speech) corpora aims at simplifying the access and use of publicly available ASR speech datasets for Polish.<br>\nThe initial release consist of test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets.",
"### Supported Tasks and Leaderboards\nThe leaderboard with benchmark of publicly available ASR systems supporting Polish is under construction.<br>\nEvaluation results of 3 commercial and 5 freely available can be found in the paper.",
"### Languages\nPolish",
"## Dataset Structure\nDataset consists audio recordings in WAV format and corresponding metadata.<br>\nAudio and metadata can be used in raw format (TSV) or via hugging face datasets library.",
"### Data Instances\n1900 audio files with original transcriptions are available in \"test\" split.<br>\nThis consitutes 1.6% of the total available transcribed speech in 10 source datasets considered in the initial release.",
"### Data Fields\nAvailable fields:\n* file_id - file identifier\n* dataset_id - source dataset identifier\n* audio - binary representation of audio file\n* ref_original - original transcription of audio file\n* hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system\n* hyp_google_default - ASR hypothesis (output) from Google ASR system, default model\n* hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model\n* hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model\n* hyp_whisper_base - ASR hypothesis (output) from Whisper base model\n* hyp_whisper_small - ASR hypothesis (output) from Whisper small model\n* hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model\n* hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model\n<br><br>\n\nFields to be added in the next release:\n* ref_spoken - manual transcription in a spoken format (without normalization)\n* ref_written - manual transcription in a written format (with normalization)",
"### Data Splits\nInitial release contains only \"test\" split.<br>\n\"Dev\" and \"train\" splits will be added in the next release.",
"## Dataset Creation",
"### Curation Rationale\nPolish ASR Speech Data Catalog was used to identify suitable datasets which can be repurposed and included in the BIGOS corpora.<br>\nThe following mandatory criteria were considered:\n* Dataset must be downloadable.\n* The license must allow for free, noncommercial use.\n* Transcriptions must be available and align with the recordings.\n* The sampling rate of audio recordings must be at least 8 kHz.\n* Audio encoding using a minimum of 16 bits per sample.",
"### Source Data\n10 datasets that meet the criteria were chosen as sources for the BIGOS dataset.\n* The Common Voice dataset (mozilla-common-voice-19)\n* The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)\n* The Clarin Studio Corpus (clarin-pjatk-studio-15)\n* The Clarin Mobile Corpus (clarin-pjatk-mobile-15)\n* The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info here\n* The Munich-AI Labs Speech corpus (mailabs-19)\n* The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info here",
"#### Initial Data Collection and Normalization\nSource text and audio files were extracted and encoded in a unified format.<br>\nDataset-specific transcription norms are preserved, including punctuation and casing. <br>\nTo strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br>\nThe only exception is ’pwr-azon-spont-20’, which contains significantly longer recordings and utterances, therefore only 100 samples are selected. <br>",
"#### Who are the source language producers?\n1. Clarin corpora - Polish Japanese Academy of Technology\n2. Common Voice - Mozilla foundation\n3. Multlingual librispeech - Facebook AI research lab\n4. Jerzy Sas and AZON datasets - Politechnika Wrocławska\n\nPlease refer to the paper for more details.",
"### Annotations",
"#### Annotation process\n\nCurrent release contains original transcriptions.\nManual transcriptions are planned for subsequent releases.",
"#### Who are the annotators?\nDepends on the source dataset.",
"### Personal and Sensitive Information\nThis corpus does not contain PII or Sensitive Information.\nAll IDs pf speakers are anonymized.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nTo be updated.",
"### Discussion of Biases\nTo be updated.",
"### Other Known Limitations\nThe dataset in the initial release contains only a subset of recordings from original datasets.",
"## Additional Information",
"### Dataset Curators\nOriginal authors of the source datasets - please refer to source-data for details.\n\nMichał Junczyk (michal.junczyk@URL) - curator of BIGOS corpora.",
"### Licensing Information\nThe BIGOS corpora is available under Creative Commons By Attribution Share Alike 4.0 license.\n\nOriginal datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:\n* Creative Commons 0 which applies to Common Voice\n* Creative Commons By Attribution Share Alike 4.0, which applies to Clarin Cyfry, Azon acoustic speech resources corpus.\n* Creative Commons By Attribution 3.0, which applies to CLARIN Mobile database, CLARIN Studio database, PELCRA Spelling and Numbers Voice Database and FLEURS dataset\n* Creative Commons By Attribution 4.0, which applies to Multilingual Librispeech and Poly AI Minds 14\n* Proprietiary License of Munich AI Labs dataset\n* Public domain mark, which applies to PWR datasets\n\n\nPlease cite BIGOS V1 paper.",
"### Contributions\n\nThanks to @goodmike31 for adding this dataset."
] |
548191053344a231c016a74927e87fae9fef786d |
# Dataset Card for DocEE Dataset
## Dataset Description
- **Homepage:**
- **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee)
- **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/)
### Dataset Summary
The DocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. The dataset is primarily annotated and collected for large-scale document-level event extraction.
### Data Fields
- `title`: TODO
- `text`: TODO
- `event_type`: TODO
- `date`: TODO
- `metadata`: TODO
**Note: this repo contains only the event detection portion of the dataset.**
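For orientation, the snippet below is a minimal sketch of loading this repo with the Hugging Face `datasets` library and inspecting the fields listed above. It assumes the default configuration and the `train` split described in the next section:

```python
from datasets import load_dataset

# Minimal sketch: load the event-detection portion of DocEE (default config assumed).
docee = load_dataset("fkdosilovic/docee-event-classification")

example = docee["train"][0]
print(example["title"])       # article title
print(example["event_type"])  # one of the 59 event types
print(example["text"][:200])  # beginning of the article body
```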
### Data Splits
The dataset has 2 splits: _train_ and _test_. The train split contains 21949 documents, while the test split contains 5536 documents. In total, the dataset contains 27485 documents classified into 59 event types.
#### Differences from the original split(s)
Originally, the dataset was split into three splits: train, validation and test. For the purposes of this repository, the original splits were joined back together and divided into train and test splits while making sure that the splits were stratified across document sources (news and wiki) and event types.
Originally, the `title` column additionally contained information from `date` and `metadata` columns. They are now separated into three columns: `date`, `metadata` and `title`. | fkdosilovic/docee-event-classification | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"wiki",
"news",
"event-detection",
"region:us"
] | 2022-11-03T20:30:39+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "DocEE", "tags": ["wiki", "news", "event-detection"]} | 2022-11-03T21:39:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #wiki #news #event-detection #region-us
|
# Dataset Card for DocEE Dataset
## Dataset Description
- Homepage:
- Repository: DocEE Dataset repository
- Paper: DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction
### Dataset Summary
The DocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. The dataset is primarily annotated and collected for large-scale document-level event extraction.
### Data Fields
- 'title': TODO
- 'text': TODO
- 'event_type': TODO
- 'date': TODO
- 'metadata': TODO
Note: this repo contains only the event detection portion of the dataset.
### Data Splits
The dataset has 2 splits: _train_ and _test_. The train split contains 21949 documents, while the test split contains 5536 documents. In total, the dataset contains 27485 documents classified into 59 event types.
#### Differences from the original split(s)
Originally, the dataset was split into three splits: train, validation and test. For the purposes of this repository, the original splits were joined back together and divided into train and test splits while making sure that the splits were stratified across document sources (news and wiki) and event types.
Originally, the 'title' column additionally contained information from 'date' and 'metadata' columns. They are now separated into three columns: 'date', 'metadata' and 'title'. | [
"# Dataset Card for DocEE Dataset",
"## Dataset Description\n\n- Homepage:\n- Repository: DocEE Dataset repository\n- Paper: DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction",
"### Dataset Summary\n\nDocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. Dataset is primarily annotated and collected for large-scale document-level event extraction.",
"### Data Fields\n\n- 'title': TODO\n- 'text': TODO\n- 'event_type': TODO\n- 'date': TODO\n- 'metadata': TODO\n\nNote: this repo contains only event detection portion of the dataset.",
"### Data Splits\n\nThe dataset has 2 splits: _train_ and _test_. Train split contains 21949 documents while test split contains 5536 documents. In total, dataset contains 27485 documents classified into 59 event types.",
"#### Differences from the original split(s)\n\nOriginally, the dataset is split into three splits: train, validation and test. For the purposes of this repository, original splits were joined back together and divided into train and test splits while making sure that splits were stratified across document sources (news and wiki) and event types.\n\nOriginally, the 'title' column additionally contained information from 'date' and 'metadata' columns. They are now separated into three columns: 'date', 'metadata' and 'title'."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #wiki #news #event-detection #region-us \n",
"# Dataset Card for DocEE Dataset",
"## Dataset Description\n\n- Homepage:\n- Repository: DocEE Dataset repository\n- Paper: DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction",
"### Dataset Summary\n\nDocEE dataset is an English-language dataset containing more than 27k news and Wikipedia articles. Dataset is primarily annotated and collected for large-scale document-level event extraction.",
"### Data Fields\n\n- 'title': TODO\n- 'text': TODO\n- 'event_type': TODO\n- 'date': TODO\n- 'metadata': TODO\n\nNote: this repo contains only event detection portion of the dataset.",
"### Data Splits\n\nThe dataset has 2 splits: _train_ and _test_. Train split contains 21949 documents while test split contains 5536 documents. In total, dataset contains 27485 documents classified into 59 event types.",
"#### Differences from the original split(s)\n\nOriginally, the dataset is split into three splits: train, validation and test. For the purposes of this repository, original splits were joined back together and divided into train and test splits while making sure that splits were stratified across document sources (news and wiki) and event types.\n\nOriginally, the 'title' column additionally contained information from 'date' and 'metadata' columns. They are now separated into three columns: 'date', 'metadata' and 'title'."
] |
8b48d820c4bc9f34966fb2ee24f3adb783d20d88 |
# Dataset Card for Beeple Everyday
Dataset used to train [beeple-diffusion](https://huggingface.co/riccardogiorato/beeple-diffusion).
The original images were obtained from [twitter.com/beeple](https://twitter.com/beeple/media).
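To inspect the images programmatically, something like the following minimal sketch with the Hugging Face `datasets` library should work (the split and feature names are not documented here, so verify them on first load):

```python
from datasets import load_dataset

# Minimal sketch: load and inspect the images used to train beeple-diffusion.
beeple = load_dataset("riccardogiorato/beeple-everyday")
print(beeple)  # prints the available splits and features (e.g. the image column)
```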
## Citation
If you use this dataset, please cite it as:
```
@misc{gioratobeeple-everyday,
author = {Riccardo, Giorato},
title = {Beeple Everyday},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/riccardogiorato/beeple-everyday/}}
}
```
| riccardogiorato/beeple-everyday | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-03T21:03:32+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-03T21:12:57+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
|
# Dataset Card for Beeple Everyday
Dataset used to train beeple-diffusion.
The original images were obtained from URL
If you use this dataset, please cite it as:
| [
"# Dataset Card for Beeple Everyday\n\nDataset used to train beeple-diffusion.\n\nThe original images were obtained from URL\n\nIf you use this dataset, please cite it as:"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"# Dataset Card for Beeple Everyday\n\nDataset used to train beeple-diffusion.\n\nThe original images were obtained from URL\n\nIf you use this dataset, please cite it as:"
] |
b3187f53037e244e39c29606e357bdd411b46801 | # Dataset Card for "dtic_sent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | stauntonjr/dtic_sent | [
"region:us"
] | 2022-11-03T22:30:39+00:00 | {"dataset_info": {"features": [{"name": "Accession Number", "dtype": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Descriptive Note", "dtype": "string"}, {"name": "Corporate Author", "dtype": "string"}, {"name": "Personal Author(s)", "sequence": "string"}, {"name": "Report Date", "dtype": "string"}, {"name": "Pagination or Media Count", "dtype": "string"}, {"name": "Descriptors", "sequence": "string"}, {"name": "Subject Categories", "dtype": "string"}, {"name": "Distribution Statement", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}, {"name": "cleantext", "dtype": "string"}, {"name": "sents", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 6951041151, "num_examples": 27425}], "download_size": 3712549813, "dataset_size": 6951041151}} | 2022-11-03T23:37:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dtic_sent"
More Information needed | [
"# Dataset Card for \"dtic_sent\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dtic_sent\"\n\nMore Information needed"
] |
ac0a9507326eaf1752d6209cec2b6b46d8113cbd |
# Dataset Card for QA-Portuguese
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Portuguese preprocessed split from [MQA dataset](https://huggingface.co/datasets/clips/mqa).
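Until the structure sections below are filled in, the dataset can be inspected with a minimal sketch like the following (no split or field names are assumed):

```python
from datasets import load_dataset

# Minimal sketch: load the Portuguese QA data derived from MQA and inspect it.
qa_pt = load_dataset("ju-resplande/qa-pt")

for split_name, split in qa_pt.items():
    print(split_name, split.num_rows, split.column_names)
```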
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
| ju-resplande/qa-pt | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|mqa",
"language:pt",
"license:cc0-1.0",
"region:us"
] | 2022-11-03T22:57:12+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["pt"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|mqa"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "pretty_name": "qa-portuguese"} | 2022-11-25T20:31:56+00:00 | [] | [
"pt"
] | TAGS
#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|mqa #language-Portuguese #license-cc0-1.0 #region-us
|
# Dataset Card for QA-Portuguese
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Portuguese preprocessed split from MQA dataset.
### Supported Tasks and Leaderboards
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @ju-resplande for adding this dataset.
| [
"# Dataset Card for QA-Portuguese",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nPortuguese preprocessed split from MQA dataset.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe dataset is Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @ju-resplande for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|mqa #language-Portuguese #license-cc0-1.0 #region-us \n",
"# Dataset Card for QA-Portuguese",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nPortuguese preprocessed split from MQA dataset.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe dataset is Portuguese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @ju-resplande for adding this dataset."
] |
4acd51b06d689bf2d0cb95dce6b552909584e8ba |
# Nixeu Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by nixeu_style"```
Use the embedding with one of [SirVeggie's](https://huggingface.co/SirVeggie) Nixeu or Wlop models for best results
If it is too strong, just add [] around it.
Trained until 8400 steps
Have fun :)
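If you prefer scripting over the webui, recent versions of the diffusers library can load textual inversion embeddings directly. The sketch below is illustrative only; the base model, the `weight_name` file and the `token` are assumptions, so adjust them to match this repo:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumption: the embedding file in this repo is named nixeu_style.pt
# and registers the trigger token "nixeu_style".
pipe.load_textual_inversion(
    "Nerfgun3/nixeu_style", weight_name="nixeu_style.pt", token="nixeu_style"
)

image = pipe("portrait of a girl, drawn by nixeu_style").images[0]
image.save("nixeu_example.png")
```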
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/5Rg6a3N.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/oWqYTHL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/45GFoZf.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NU8Rc4z.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Yvl836l.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/nixeu_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-03T23:29:09+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-03T23:36:01+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Nixeu Style Embedding / Textual Inversion
=========================================
Usage
-----
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
Use the embedding with one of SirVeggie's Nixeu or Wlop models for best results
If it is too strong, just add [] around it.
Trained until 8400 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
849be46ab60cfbd53a5bd950538253aecd6cea78 |
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://aclanthology.org/P18-1177/](https://aclanthology.org/P18-1177/)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the QA dataset collected by [Harvesting Paragraph-level Question-Answer Pairs from Wikipedia](https://aclanthology.org/P18-1177) (Du & Cardie, ACL 2018).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|test |
|--------:|---------:|-------:|
|1,204,925| 30,293| 24,473|
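For reference, a minimal sketch of loading the corpus with the Hugging Face `datasets` library (default configuration assumed):

```python
from datasets import load_dataset

# Minimal sketch: load the harvested QA pairs (default config assumed).
dataset = load_dataset("lmqg/qa_harvesting_from_wikipedia")

sample = dataset["train"][0]
print(sample["question"])
print(sample["answers"])        # json-style answer annotations
print(sample["context"][:200])  # beginning of the source paragraph
```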
## Citation Information
```
@inproceedings{du-cardie-2018-harvesting,
title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia",
author = "Du, Xinya and
Cardie, Claire",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1177",
doi = "10.18653/v1/P18-1177",
pages = "1907--1917",
abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for the this large-scale generated corpus from Wikipedia.",
}
``` | lmqg/qa_harvesting_from_wikipedia | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:1M<",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-04T06:30:51+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "1M<", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Harvesting QA paris from Wikipedia."} | 2022-11-05T03:19:40+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-1M< #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #region-us
| Dataset Card for "lmqg/qa\_harvesting\_from\_wikipedia"
=======================================================
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Point of Contact: Asahi Ushio
### Dataset Summary
This is the QA dataset collected by Harvesting Paragraph-level Question-Answer Pairs from Wikipedia (Du & Cardie, ACL 2018).
### Supported Tasks and Leaderboards
* 'question-answering'
### Languages
English (en)
Dataset Structure
-----------------
### Data Fields
The data fields are the same among all splits.
#### plain\_text
* 'id': a 'string' feature of id
* 'title': a 'string' feature of title of the paragraph
* 'context': a 'string' feature of paragraph
* 'question': a 'string' feature of question
* 'answers': a 'json' feature of answers
### Data Splits
| [
"### Dataset Summary\n\n\nThis is the QA dataset collected by Harvesting Paragraph-level Question-Answer Pairs from Wikipedia (Du & Cardie, ACL 2018).",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-1M< #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThis is the QA dataset collected by Harvesting Paragraph-level Question-Answer Pairs from Wikipedia (Du & Cardie, ACL 2018).",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] |
7d5efeb7e157099ebd0f630628e64b1cdc97f6e2 | # Dataset Card for "auto_content"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ayush2609/auto_content | [
"region:us"
] | 2022-11-04T09:32:38+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25207.5885509839, "num_examples": 503}, {"name": "validation", "num_bytes": 2806.4114490161, "num_examples": 56}], "download_size": 19771, "dataset_size": 28014.0}} | 2022-11-04T09:32:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "auto_content"
More Information needed | [
"# Dataset Card for \"auto_content\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"auto_content\"\n\nMore Information needed"
] |
cc540899103705a0cb87bea53bda71fa14a80737 | # Dataset Card for "answerable_tydiqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa | [
"region:us"
] | 2022-11-04T09:44:49+00:00 | {"dataset_info": {"features": [{"name": "question_text", "dtype": "string"}, {"name": "document_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "annotations", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "document_plaintext", "dtype": "string"}, {"name": "document_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32084629.326371837, "num_examples": 29868}, {"name": "validation", "num_bytes": 3778385.324427767, "num_examples": 3712}], "download_size": 16354337, "dataset_size": 35863014.6507996}} | 2022-11-04T09:45:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa\"\n\nMore Information needed"
] |
f71b7973349141cb8a3d40b6ee2797830f62ae68 | # Dataset Card for "answerable_tydiqa_restructured"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_restructured | [
"region:us"
] | 2022-11-04T09:45:21+00:00 | {"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21940019, "num_examples": 29868}, {"name": "validation", "num_bytes": 2730209, "num_examples": 3712}], "download_size": 17468684, "dataset_size": 24670228}} | 2022-11-04T09:45:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_restructured"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_restructured\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_restructured\"\n\nMore Information needed"
] |
90b5976050208f4ab764422c334b95dfd681e4f0 | # Dataset Card for "answerable_tydiqa_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_preprocessed | [
"region:us"
] | 2022-11-04T09:46:00+00:00 | {"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21252073.336011786, "num_examples": 29800}, {"name": "validation", "num_bytes": 2657400.5792025863, "num_examples": 3709}], "download_size": 16838253, "dataset_size": 23909473.91521437}} | 2022-11-04T09:46:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_preprocessed"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_preprocessed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_preprocessed\"\n\nMore Information needed"
] |
b20f6950ca9773dac84e57b2f052cc9c3fcdf448 | # Dataset Card for "answerable_tydiqa_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_tokenized | [
"region:us"
] | 2022-11-04T09:46:52+00:00 | {"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "sequence": "string"}, {"name": "context", "sequence": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "labels", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30320669, "num_examples": 29800}, {"name": "validation", "num_bytes": 3761508, "num_examples": 3709}], "download_size": 17981416, "dataset_size": 34082177}} | 2022-11-04T09:47:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_tokenized"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_tokenized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_tokenized\"\n\nMore Information needed"
] |
148e1cda53c9697ea386953a60e8493dbd102cb1 |
# Guweiz Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by guweiz_style"```
If it is too strong, just add [] around it.
Trained until 9000 steps
Have fun :)
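As with other textual inversion embeddings, recent diffusers versions can also load this one outside the webui. The sketch below is illustrative; the base model, file name and token are assumptions to verify against this repo:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumption: the embedding file is guweiz_style.pt with trigger token "guweiz_style".
pipe.load_textual_inversion(
    "Nerfgun3/guweiz_style", weight_name="guweiz_style.pt", token="guweiz_style"
)

image = pipe("a city street at night, drawn by guweiz_style").images[0]
image.save("guweiz_example.png")
```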
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/eCbB30e.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/U1Fezud.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/DqruJgs.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/O7VV7BS.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/k4sIsvH.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/guweiz_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-04T10:11:35+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-04T10:14:19+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Guweiz Artist Embedding / Textual Inversion
===========================================
Usage
-----
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 9000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
60d8a487125ced60f6cd19e37aac3739d135b6b5 | # Dataset Card for "tx-data-to-decode"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lucapro/tx-data-to-decode | [
"region:us"
] | 2022-11-04T10:21:51+00:00 | {"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "de", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3527858, "num_examples": 6057}], "download_size": 995171, "dataset_size": 3527858}} | 2022-11-04T10:22:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tx-data-to-decode"
More Information needed | [
"# Dataset Card for \"tx-data-to-decode\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tx-data-to-decode\"\n\nMore Information needed"
] |
626de4a1bf832412aed03cd731b74bc5ac978fcb | # Dataset Card for "icd10-reference-cm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rjac/icd10-reference-cm | [
"region:us"
] | 2022-11-04T11:23:22+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "icd10_tc_category", "dtype": "string"}, {"name": "icd10_tc_category_group", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13286095, "num_examples": 71480}], "download_size": 2715065, "dataset_size": 13286095}} | 2022-11-04T11:23:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "icd10-reference-cm"
More Information needed | [
"# Dataset Card for \"icd10-reference-cm\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"icd10-reference-cm\"\n\nMore Information needed"
] |
587e3170fcb95d51295acfea053c6570cedd8a41 | # Dataset Card for "Pierse-movie-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Pierse-movie-dataset | [
"region:us"
] | 2022-11-04T11:34:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53518991.51408206, "num_examples": 1873138}, {"name": "validation", "num_bytes": 5946570.485917939, "num_examples": 208127}], "download_size": 33525659, "dataset_size": 59465562.0}} | 2022-11-04T11:35:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Pierse-movie-dataset"
More Information needed | [
"# Dataset Card for \"Pierse-movie-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Pierse-movie-dataset\"\n\nMore Information needed"
] |
7f5cd8bfac9cee6eb3a88ba576779a76c30bf806 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466167 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-04T13:21:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["joelito/brazilian_court_decisions"], "eval_info": {"task": "multi_class_classification", "model": "Luciano/bertimbau-base-finetuned-brazilian_court_decisions", "metrics": [], "dataset_name": "joelito/brazilian_court_decisions", "dataset_config": "joelito--brazilian_court_decisions", "dataset_split": "test", "col_mapping": {"text": "decision_description", "target": "judgment_label"}}} | 2022-11-04T13:22:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions\n* Dataset: joelito/brazilian_court_decisions\n* Config: joelito--brazilian_court_decisions\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions\n* Dataset: joelito/brazilian_court_decisions\n* Config: joelito--brazilian_court_decisions\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
04201c6a1a1cb7f50160ab3b0e0a7a630bef5463 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466168 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-04T13:21:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["joelito/brazilian_court_decisions"], "eval_info": {"task": "multi_class_classification", "model": "Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions", "metrics": [], "dataset_name": "joelito/brazilian_court_decisions", "dataset_config": "joelito--brazilian_court_decisions", "dataset_split": "test", "col_mapping": {"text": "decision_description", "target": "judgment_label"}}} | 2022-11-04T13:22:29+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions
* Dataset: joelito/brazilian_court_decisions
* Config: joelito--brazilian_court_decisions
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions\n* Dataset: joelito/brazilian_court_decisions\n* Config: joelito--brazilian_court_decisions\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions\n* Dataset: joelito/brazilian_court_decisions\n* Config: joelito--brazilian_court_decisions\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
46f712c7d0dbfb4aaa83bdce8c4f9a4c2f080e69 | # Dataset Card for "test_splits_order"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_splits_order | [
"region:us"
] | 2022-11-04T13:30:41+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 32, "num_examples": 2}, {"name": "train", "num_bytes": 48, "num_examples": 2}], "download_size": 1776, "dataset_size": 80}} | 2022-11-04T13:30:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_splits_order"
More Information needed | [
"# Dataset Card for \"test_splits_order\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_splits_order\"\n\nMore Information needed"
] |
0a118a6d943dba991d968c909121d7e231f968f0 | # Dataset Card for "test_splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_splits | [
"region:us"
] | 2022-11-04T13:53:18+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 116, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1698, "dataset_size": 162}} | 2022-11-04T13:59:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_splits"
More Information needed | [
"# Dataset Card for \"test_splits\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_splits\"\n\nMore Information needed"
] |
0ff5ded4caccbfeb631f5f70ea3e19a773e0004e | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| duyngtr16061999/pokemon_fashion_mixed | [
"region:us"
] | 2022-11-04T15:30:52+00:00 | {} | 2022-11-04T16:21:57+00:00 | [] | [] | TAGS
#region-us
| ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\nThanks to @github-username for adding this dataset."
] |
0308f18780cb95bcb0625b1d0fa798c15d3aa250 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MauritsG](https://huggingface.co/MauritsG) for evaluating this model. | autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-103f11-1986766201 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-04T15:49:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": ["recall", "precision"], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-04T15:49:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @MauritsG for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MauritsG for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MauritsG for evaluating this model."
] |
c4c55382a58a997f57ff1100eff6696d1574204d | # Dataset Card for "dirt_teff2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roydcarlson/dirt_teff2 | [
"region:us"
] | 2022-11-04T17:28:46+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6436424.0, "num_examples": 7}], "download_size": 6352411, "dataset_size": 6436424.0}} | 2022-11-04T17:28:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dirt_teff2"
More Information needed | [
"# Dataset Card for \"dirt_teff2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dirt_teff2\"\n\nMore Information needed"
] |
f2675b210a774ec7e8116c38acb39e724f101ea4 | # Dataset Card for "sidewalk-imagery2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | roydcarlson/sidewalk-imagery2 | [
"region:us"
] | 2022-11-04T18:41:10+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138394.0, "num_examples": 10}], "download_size": 3139599, "dataset_size": 3138394.0}} | 2022-11-04T18:41:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sidewalk-imagery2"
More Information needed | [
"# Dataset Card for \"sidewalk-imagery2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sidewalk-imagery2\"\n\nMore Information needed"
] |
5976a1c9abfe4c8a216fccd28cd199d22a53a40a | hjvjjv | codysoccerman/my_test_dataset | [
"region:us"
] | 2022-11-04T19:13:19+00:00 | {} | 2022-11-20T01:05:46+00:00 | [] | [] | TAGS
#region-us
| hjvjjv | [] | [
"TAGS\n#region-us \n"
] |
590f8ab8f495a868ca9d191a4fd0fb4255d0788a | # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:
* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use these as the token-level offensive labels in SOLD.

## Data
SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive tokens. A token is marked as 1 if it is offensive and 0 otherwise.
* **label** - Sentence-level label, offensive or not-offensive.

SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```
The dataset contains the following columns:
* **post_id** - Twitter ID
* **text** - Post text
Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
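Because the per-classifier scores are exposed as columns, one straightforward way to use SemiSOLD is to ensemble them and keep only the tweets the classifiers agree on. The sketch below is illustrative: the column names are taken from the list above, and the 0.25 agreement cut-off is an arbitrary example of the kind of threshold that the `--std` argument in the experiments below controls.

```python
from datasets import load_dataset

semi_sold = load_dataset('sinhala-nlp/SemiSOLD', split='train').to_pandas()

# Per-classifier offensiveness score columns, as listed above.
score_cols = ['xlmr', 'xlmt', 'mbert', 'sinbert', 'lstm_ft', 'cnn_ft',
              'lstm_cbow', 'cnn_cbow', 'lstm_sl', 'cnn_sl', 'svm']

semi_sold['mean_score'] = semi_sold[score_cols].mean(axis=1)
semi_sold['std_score'] = semi_sold[score_cols].std(axis=1)

# Keep tweets with high inter-classifier agreement (illustrative cut-off).
confident = semi_sold[semi_sold['std_score'] < 0.25]
print(f'{len(confident)} high-agreement tweets out of {len(semi_sold)}')
```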
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r requirements.txt
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
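For example, a run that fine-tunes an XLM-R checkpoint with transfer learning from the English OffensEval data could be launched as follows (the checkpoint name and flag values are illustrative, not the only valid choices):

~~~
python -m experiments.sentence_level.sinhala_deepoffense \
    --model_type xlmroberta \
    --model_name xlm-roberta-large \
    --transfer true \
    --transfer_language en
~~~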
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddings file.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
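For example, an LSTM model over pre-trained word embeddings, with semi-supervised augmentation of the offensive instances only, could be launched as follows (the embedding path is a placeholder and the 0.25 cut-off is illustrative):

~~~
python -m experiments.sentence_level.sinhala_offensive_nn \
    --model_type lstm \
    --model_name embeddings/sinhala_word_vectors.bin \
    --augment true \
    --std 0.25 \
    --augment_type off
~~~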
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
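For example, a token-level model initialised with transfer learning from HateXplain could be launched as follows (the checkpoint name is illustrative):

~~~
python -m experiments.sentence_level.sinhala_mudes \
    --model_type xlmroberta \
    --model_name xlm-roberta-large \
    --transfer true \
    --transfer_language hatex
~~~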
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
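For example (the checkpoint name is illustrative):

~~~
python -m experiments.sentence_level.sinhala_lime \
    --model_type xlmroberta \
    --model_name xlm-roberta-large
~~~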
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
## Citation
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ | sinhala-nlp/SOLD | [
"region:us"
] | 2022-11-04T19:45:07+00:00 | {} | 2022-12-20T20:19:41+00:00 | [] | [] | TAGS
#region-us
| # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:
* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* Not Offensive (NOT): Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use these as the token-level offensive labels in SOLD.
!Alt text
## Data
SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.

The dataset contains the following columns.
* post_id - Twitter ID
* text - Post text
* tokens - Tokenised text. Each token is separated by a space.
* rationals - Offensive tokens. A token is marked as 1 if it is offensive and 0 otherwise.
* label - Sentence-level label, offensive or not-offensive.
!Alt text
SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.

The dataset contains the following columns:
* post_id - Twitter ID
* text - Post text
Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r URL
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddings file.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ | [
"# SOLD - A Benchmark for Sinhala Offensive Language Identification\n\nIn this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.\n\n:warning: This repository contains texts that may be offensive and harmful.",
"## Annotation\nWe use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).",
"### Sentence-level \nOur sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:\n* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.\n* Not Offensive (NOT): Posts that do not contain offense or profanity.\n\nEach tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.",
"### Token-level\nTo provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, We ask the annotators to highlight particular tokens in a tweet that supports their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that supports the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD.\n\n\n!Alt text",
"## Data\nSOLD is released in HuggingFace. It can be loaded in to pandas dataframes using the following code. \n\n\nThe dataset contains of the following columns. \n* post_id - Twitter ID\n* text - Post text\n* tokens - Tokenised text. Each token is seperated by a space. \n* rationals - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.\n* label - Sentence-level label, offensive or not-offensive. \n\n!Alt text\n\nSemiSOLD is also released HuggingFace and can be loaded to a pandas dataframe using the following code. \n\n\nThe dataset contains following columns \n* post_id - Twitter ID\n* text - Post text\n\nFurthermore it contains predicted offensiveness scores from nine classifiers trained on SOLD train; xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm",
"## Experiments\nClone the repository and install the libraries using the following command (preferably inside a conda environment)\n\n~~~\npip install -r URL\n~~~",
"### Sentence-level\nSentence-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_deepoffense\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hi, en or si).\n * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).\n * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).\n * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~\n\nSentence-level CNN and LSTM based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_offensive_nn\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the architecture (cnn2D, lstm).\n--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~",
"### Token-level\nToken-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_mudes\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hatex or tsd).\n * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).\n * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).\n~~~\n\nToken-level LIME experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_lime\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n~~~",
"## Acknowledgments\nWe want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.\n\nIf you are using the dataset or the models please cite the following paper\n~~~\n@article{ranasinghe2022sold,\n title={SOLD: Sinhala Offensive Language Dataset},\n author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},\n journal={arXiv preprint arXiv:2212.00851},\n year={2022}\n}\n~~~"
] | [
"TAGS\n#region-us \n",
"# SOLD - A Benchmark for Sinhala Offensive Language Identification\n\nIn this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.\n\n:warning: This repository contains texts that may be offensive and harmful.",
"## Annotation\nWe use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).",
"### Sentence-level \nOur sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:\n* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.\n* Not Offensive (NOT): Posts that do not contain offense or profanity.\n\nEach tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.",
"### Token-level\nTo provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, We ask the annotators to highlight particular tokens in a tweet that supports their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that supports the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD.\n\n\n!Alt text",
"## Data\nSOLD is released in HuggingFace. It can be loaded in to pandas dataframes using the following code. \n\n\nThe dataset contains of the following columns. \n* post_id - Twitter ID\n* text - Post text\n* tokens - Tokenised text. Each token is seperated by a space. \n* rationals - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.\n* label - Sentence-level label, offensive or not-offensive. \n\n!Alt text\n\nSemiSOLD is also released HuggingFace and can be loaded to a pandas dataframe using the following code. \n\n\nThe dataset contains following columns \n* post_id - Twitter ID\n* text - Post text\n\nFurthermore it contains predicted offensiveness scores from nine classifiers trained on SOLD train; xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm",
"## Experiments\nClone the repository and install the libraries using the following command (preferably inside a conda environment)\n\n~~~\npip install -r URL\n~~~",
"### Sentence-level\nSentence-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_deepoffense\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hi, en or si).\n * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).\n * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).\n * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~\n\nSentence-level CNN and LSTM based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_offensive_nn\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the architecture (cnn2D, lstm).\n--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~",
"### Token-level\nToken-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_mudes\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hatex or tsd).\n * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).\n * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).\n~~~\n\nToken-level LIME experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_lime\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n~~~",
"## Acknowledgments\nWe want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.\n\nIf you are using the dataset or the models please cite the following paper\n~~~\n@article{ranasinghe2022sold,\n title={SOLD: Sinhala Offensive Language Dataset},\n author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},\n journal={arXiv preprint arXiv:2212.00851},\n year={2022}\n}\n~~~"
] |
d3c6aafbdaca0dac1274db14f142f0c20a5348b2 | # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:
* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use these as the token-level offensive labels in SOLD.

## Data
SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```
The dataset contains the following columns.
* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive tokens. A token is marked as 1 if it is offensive and 0 otherwise.
* **label** - Sentence-level label, offensive or not-offensive.

SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```
The dataset contains the following columns:
* **post_id** - Twitter ID
* **text** - Post text
Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r requirements.txt
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddings file.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
## Citation
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ | sinhala-nlp/SemiSOLD | [
"region:us"
] | 2022-11-04T20:42:38+00:00 | {} | 2022-12-20T20:21:26+00:00 | [] | [] | TAGS
#region-us
| # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:
* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* Not Offensive (NOT): Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use these as the token-level offensive labels in SOLD.
!Alt text
## Data
SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.

The dataset contains the following columns.
* post_id - Twitter ID
* text - Post text
* tokens - Tokenised text. Each token is separated by a space.
* rationals - Offensive tokens. A token is marked as 1 if it is offensive and 0 otherwise.
* label - Sentence-level label, offensive or not-offensive.
!Alt text
SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.

The dataset contains the following columns:
* post_id - Twitter ID
* text - Post text
Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r URL
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddings file.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ | [
"# SOLD - A Benchmark for Sinhala Offensive Language Identification\n\nIn this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.\n\n:warning: This repository contains texts that may be offensive and harmful.",
"## Annotation\nWe use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).",
"### Sentence-level \nOur sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:\n* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.\n* Not Offensive (NOT): Posts that do not contain offense or profanity.\n\nEach tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.",
"### Token-level\nTo provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, We ask the annotators to highlight particular tokens in a tweet that supports their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that supports the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD.\n\n\n!Alt text",
"## Data\nSOLD is released in HuggingFace. It can be loaded in to pandas dataframes using the following code. \n\n\nThe dataset contains of the following columns. \n* post_id - Twitter ID\n* text - Post text\n* tokens - Tokenised text. Each token is seperated by a space. \n* rationals - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.\n* label - Sentence-level label, offensive or not-offensive. \n\n!Alt text\n\nSemiSOLD is also released HuggingFace and can be loaded to a pandas dataframe using the following code. \n\n\nThe dataset contains following columns \n* post_id - Twitter ID\n* text - Post text\n\nFurthermore it contains predicted offensiveness scores from nine classifiers trained on SOLD train; xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm",
"## Experiments\nClone the repository and install the libraries using the following command (preferably inside a conda environment)\n\n~~~\npip install -r URL\n~~~",
"### Sentence-level\nSentence-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_deepoffense\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hi, en or si).\n * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).\n * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).\n * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~\n\nSentence-level CNN and LSTM based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_offensive_nn\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the architecture (cnn2D, lstm).\n--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~",
"### Token-level\nToken-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_mudes\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hatex or tsd).\n * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).\n * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).\n~~~\n\nToken-level LIME experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_lime\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n~~~",
"## Acknowledgments\nWe want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.\n\nIf you are using the dataset or the models please cite the following paper\n~~~\n@article{ranasinghe2022sold,\n title={SOLD: Sinhala Offensive Language Dataset},\n author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},\n journal={arXiv preprint arXiv:2212.00851},\n year={2022}\n}\n~~~"
] | [
"TAGS\n#region-us \n",
"# SOLD - A Benchmark for Sinhala Offensive Language Identification\n\nIn this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. SOLD is the largest offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.\n\n:warning: This repository contains texts that may be offensive and harmful.",
"## Annotation\nWe use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).",
"### Sentence-level \nOur sentence-level offensive language detection follows level A in OLID (Zampieri et al., 2019). We asked annotators to discriminate between the following types of tweets:\n* Offensive (OFF): Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.\n* Not Offensive (NOT): Posts that do not contain offense or profanity.\n\nEach tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.",
"### Token-level\nTo provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain (Mathew et al., 2021), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, We ask the annotators to highlight particular tokens in a tweet that supports their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that supports the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD.\n\n\n!Alt text",
"## Data\nSOLD is released in HuggingFace. It can be loaded in to pandas dataframes using the following code. \n\n\nThe dataset contains of the following columns. \n* post_id - Twitter ID\n* text - Post text\n* tokens - Tokenised text. Each token is seperated by a space. \n* rationals - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.\n* label - Sentence-level label, offensive or not-offensive. \n\n!Alt text\n\nSemiSOLD is also released HuggingFace and can be loaded to a pandas dataframe using the following code. \n\n\nThe dataset contains following columns \n* post_id - Twitter ID\n* text - Post text\n\nFurthermore it contains predicted offensiveness scores from nine classifiers trained on SOLD train; xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm",
"## Experiments\nClone the repository and install the libraries using the following command (preferably inside a conda environment)\n\n~~~\npip install -r URL\n~~~",
"### Sentence-level\nSentence-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_deepoffense\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hi, en or si).\n * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).\n * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).\n * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~\n\nSentence-level CNN and LSTM based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_offensive_nn\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the architecture (cnn2D, lstm).\n--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.\n--augment : Perform semi supervised data augmentation.\n--std : Standard deviation of the models to cut down data augmentation.\n--augment_type: The type of the data augmentation.\n * off - Augment only the offensive instances.\n * normal - Augment both offensive and non-offensive instances.\n~~~",
"### Token-level\nToken-level transformer based experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_mudes\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n--transfer : Whether to perform transfer learning or not (true or false).\n--transfer_language : The initial language if transfer learning is performed (hatex or tsd).\n * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).\n * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).\n~~~\n\nToken-level LIME experiments can be executed using the following command. \n\n~~~\npython -m experiments.sentence_level.sinhala_lime\n~~~\n\nThe command takes the following arguments;\n\n~~~\n--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).\n--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.\n~~~",
"## Acknowledgments\nWe want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.\n\nIf you are using the dataset or the models please cite the following paper\n~~~\n@article{ranasinghe2022sold,\n title={SOLD: Sinhala Offensive Language Dataset},\n author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},\n journal={arXiv preprint arXiv:2212.00851},\n year={2022}\n}\n~~~"
] |
d2687bf97a010478ad55cdc6b17489d7bdda6158 | # AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 1
},
{
"image": "<512x512 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=3, names=['man', 'other', 'woman'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 45 |
| valid | 13 |
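
A minimal sketch for inspecting these splits with the `datasets` library (the repo id is taken from this card; the split keys are assumed to match the table above):

```python
from datasets import load_dataset

# Load the AutoTrain-generated image classification dataset
ds = load_dataset("sirtolkien/autotrain-data-test")

sample = ds["train"][0]
print(sample["target"])  # class index into ['man', 'other', 'woman']
sample["image"]          # decoded as a 512x512 RGB PIL image
```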
| sirtolkien/autotrain-data-test | [
"task_categories:image-classification",
"doi:10.57967/hf/0090",
"region:us"
] | 2022-11-04T20:56:01+00:00 | {"task_categories": ["image-classification"]} | 2022-11-04T21:02:23+00:00 | [] | [] | TAGS
#task_categories-image-classification #doi-10.57967/hf/0090 #region-us
| AutoTrain Dataset for project: test
===================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #doi-10.57967/hf/0090 #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
8d08878020856ee2a2e28f5624c8c684ee84b2ea | # Dataset Card for Multilingual Sarcasm Detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
## Dataset Description
- Repository: https://github.com/helinivan/multilingual-sarcasm-detector
### Dataset Summary
The dataset consists of news article headlines in Dutch, English, and Italian. The headlines come both from genuine news sources and from sarcastic/satirical newspapers. Each article is labeled sarcastic or non-sarcastic based on its source.
The sources of news articles are:
- The Huffington Post (en, non-sarcastic)
- The Onion (en, sarcastic)
- NOS (nl, non-sarcastic)
- De Speld (nl, sarcastic)
- Il Giornale (it, non-sarcastic)
- Lercio (it, sarcastic)
### Languages
`en`, `nl`, `it`
## Dataset Structure
### Data Instances
- total_length: 67,480
- sarcastic: 25,609
- non_sarcastic: 41,817
- english: 22,837
- dutch: 20,771
- italian: 23,871
### Data Fields
- article_url: str
- article_title: str
- is_sarcastic: int
- lang: str
- title_length: int
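
As a quick sanity check of the fields above, a minimal loading sketch (assuming the `datasets` library and a single `train` split; the repo id is taken from this card):

```python
from datasets import load_dataset

ds = load_dataset("helinivan/sarcasm_headlines_multilingual", split="train")
df = ds.to_pandas()

# Cross-tabulate language against the sarcasm label using the fields listed above
print(df.groupby(["lang", "is_sarcastic"]).size())
```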
## Dataset Creation
### Source Data
- Selected all English news article titles from this Kaggle dataset: https://www.kaggle.com/datasets/rmisra/news-headlines-dataset-for-sarcasm-detection
- Randomly selected 15k Dutch non-sarcastic news article titles from this Kaggle dataset: https://www.kaggle.com/datasets/maxscheijen/dutch-news-articles
The rest of the data is scraped directly from the newspapers. | helinivan/sarcasm_headlines_multilingual | [
"region:us"
] | 2022-11-04T22:23:03+00:00 | {} | 2022-12-04T18:56:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for Multilingual Sarcasm Detection
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
## Dataset Description
- Repository: URL
### Dataset Summary
The dataset consists of news article headlines in Dutch, English, and Italian. The headlines come both from genuine news sources and from sarcastic/satirical newspapers. Each article is labeled sarcastic or non-sarcastic based on its source.
The sources of news articles are:
- The Huffington Post (en, non-sarcastic)
- The Onion (en, sarcastic)
- NOS (nl, non-sarcastic)
- De Speld (nl, sarcastic)
- Il Giornale (it, non-sarcastic)
- Lercio (it, sarcastic)
### Languages
'en', 'nl', 'it'
## Dataset Structure
### Data Instances
- total_length: 67,480
- sarcastic: 25,609
- non_sarcastic: 41,817
- english: 22,837
- dutch: 20,771
- italian: 23,871
### Data Fields
- article_url: str
- article_title: str
- is_sarcastic: int
- lang: str
- title_length: int
## Dataset Creation
### Source Data
- Selected all English news article titles from this Kaggle dataset: URL
- Randomly selected 15k Dutch non-sarcastic news article titles from this Kaggle dataset: URL
The rest of the data is scraped directly from the newspapers. | [
"# Dataset Card for Multilingual Sarcasm Detection",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nDataset consists of news article headlines in Dutch, English and Italian. The news article headlines are both from actual news sources and sarcastic/satirical newspapers. The news article is determined sarcastic/non-sarcastic based on the news article source.\n\nThe sources of news articles are:\n- The Huffington Post (en, non-sarcastic)\n- The Onion (en, sarcastic)\n- NOS (nl, non-sarcastic)\n- De Speld (nl, sarcastic)\n- Il Giornale (it, non-sarcastic)\n- Lercio (it, sarcastic)",
"### Languages\n\n'en', 'nl', 'it'",
"## Dataset Structure",
"### Data Instances\n\n- total_length: 67,480\n- sarcastic: 25,609\n- non_sarcastic: 41,817\n- english: 22,837\n- dutch: 20,771\n- italian: 23,871",
"### Data Fields\n\n- article_url: str\n- article_title: str\n- is_sarcastic: int\n- lang: str\n- title_length: int",
"## Dataset Creation",
"### Source Data\n\n- Selected all English news article titles from this Kaggle dataset: URL\n- Randomly selected 15k Dutch non-sarcastic news article titles from this Kaggle dataset: URL\n\nRest of the data is scraped directly from the newspapers."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Multilingual Sarcasm Detection",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nDataset consists of news article headlines in Dutch, English and Italian. The news article headlines are both from actual news sources and sarcastic/satirical newspapers. The news article is determined sarcastic/non-sarcastic based on the news article source.\n\nThe sources of news articles are:\n- The Huffington Post (en, non-sarcastic)\n- The Onion (en, sarcastic)\n- NOS (nl, non-sarcastic)\n- De Speld (nl, sarcastic)\n- Il Giornale (it, non-sarcastic)\n- Lercio (it, sarcastic)",
"### Languages\n\n'en', 'nl', 'it'",
"## Dataset Structure",
"### Data Instances\n\n- total_length: 67,480\n- sarcastic: 25,609\n- non_sarcastic: 41,817\n- english: 22,837\n- dutch: 20,771\n- italian: 23,871",
"### Data Fields\n\n- article_url: str\n- article_title: str\n- is_sarcastic: int\n- lang: str\n- title_length: int",
"## Dataset Creation",
"### Source Data\n\n- Selected all English news article titles from this Kaggle dataset: URL\n- Randomly selected 15k Dutch non-sarcastic news article titles from this Kaggle dataset: URL\n\nRest of the data is scraped directly from the newspapers."
] |
4192bf0f29316c0ed081510171b83a71883f1eaa | # Dataset Card for "dummy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/dummy | [
"region:us"
] | 2022-11-04T22:28:56+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "female", "1": "male"}}}}], "splits": [{"name": "train", "num_bytes": 50, "num_examples": 2}], "download_size": 1182, "dataset_size": 50}} | 2022-11-29T15:57:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dummy"
More Information needed | [
"# Dataset Card for \"dummy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dummy\"\n\nMore Information needed"
] |
31d3a08d5af6c0eb87e822ae146b14955d8453e0 |
# Landscape Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File: ```land_style```
To use it in a prompt: ```"art by land_style"```
For best use write something like ```highly detailed background art by land_style```
### Version 2:
File: ```landscape_style```
To use it in a prompt: ```"art by landscape_style"```
For best use write something like ```highly detailed background art by landscape_style```
If it is too strong, just add [] around it.
Trained for 7,000 steps
Have fun :)
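
Outside the webui, the same file can in principle be used with diffusers' textual-inversion loader. This is a hypothetical sketch, not part of this embedding's documentation: the base model, the weight file name, and the exact loader arguments are assumptions inferred from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumed file name and trigger token, matching Version 1 of this card
pipe.load_textual_inversion(
    "Nerfgun3/land_style", weight_name="land_style.pt", token="land_style"
)

image = pipe("highly detailed background art by land_style").images[0]
image.save("landscape.png")
```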
## Example Pictures
<img src=https://i.imgur.com/UjoXFkJ.png width=100% height=100%/>
<img src=https://i.imgur.com/rAoEyLK.png width=100% height=100%/>
<img src=https://i.imgur.com/SpPsc7i.png width=100% height=100%/>
<img src=https://i.imgur.com/zMH0EeI.png width=100% height=100%/>
<img src=https://i.imgur.com/iQe0Jxc.png width=100% height=100%/>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/land_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-04T22:56:47+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-12T14:42:39+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
|
# Landscape Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File:
To use it in a prompt:
For best use write something like
### Version 2:
File:
To use it in a prompt:
For best use write something like
If it is too strong, just add [] around it.
Trained for 7,000 steps
Have fun :)
## Example Pictures
<img src=https://i.URL width=100% height=100%/>
<img src=https://i.URL width=100% height=100%/>
<img src=https://i.URL width=100% height=100%/>
<img src=https://i.URL width=100% height=100%/>
<img src=https://i.URL width=100% height=100%/>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here | [
"# Landscape Style Embedding / Textual Inversion",
"## Usage\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTwo different Versions:",
"### Version 1:\n\nFile: \n\nTo use it in a prompt: \n\nFor best use write something like",
"### Version 2:\n\nFile: \n\nTo use it in a prompt: \n\nFor best use write something like \n\nIf it is to strong just add [] around it.\n\nTrained until 7000 steps\n\nHave fun :)",
"## Example Pictures\n\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n",
"# Landscape Style Embedding / Textual Inversion",
"## Usage\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTwo different Versions:",
"### Version 1:\n\nFile: \n\nTo use it in a prompt: \n\nFor best use write something like",
"### Version 2:\n\nFile: \n\nTo use it in a prompt: \n\nFor best use write something like \n\nIf it is to strong just add [] around it.\n\nTrained until 7000 steps\n\nHave fun :)",
"## Example Pictures\n\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>\n<img src=https://i.URL width=100% height=100%/>",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] |
55f1c09dcca698cd7015ff37b35ee2e136df6797 | # Dataset Card for "Romance-baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-baseline | [
"region:us"
] | 2022-11-05T01:05:26+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39176840.7, "num_examples": 1105002}, {"name": "validation", "num_bytes": 4352982.3, "num_examples": 122778}], "download_size": 23278822, "dataset_size": 43529823.0}} | 2022-11-05T01:05:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Romance-baseline"
More Information needed | [
"# Dataset Card for \"Romance-baseline\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Romance-baseline\"\n\nMore Information needed"
] |
7b8b77e8fdeb334e3550d1fb6167d4cc92dc6957 |
# Dataset Card for "lmqg/qa_squadshifts"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2004.14444](https://arxiv.org/abs/2004.14444)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the SQuADShifts dataset with a custom training/validation/test split, following [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
| name          | train | valid | test   |
|---------------|------:|------:|-------:|
| default (all) | 9,209 | 6,283 | 18,844 |
| amazon        | 3,295 | 1,648 | 4,942  |
| new_wiki      | 2,646 | 1,323 | 3,969  |
| nyt           | 3,355 | 1,678 | 5,032  |
| reddit        | 3,268 | 1,634 | 4,901  |
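
Each domain in the table is assumed to be exposed as its own config; a minimal loading sketch with the `datasets` library:

```python
from datasets import load_dataset

# Config names assumed to follow the table: "amazon", "new_wiki", "nyt", "reddit"
dataset = load_dataset("lmqg/qa_squadshifts", "amazon")

example = dataset["test"][0]
print(example["question"])
print(example["answers"])  # stored as a json feature per the field list above
```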
## Citation Information
```
@inproceedings{miller2020effect,
title={The effect of natural distribution shift on question answering models},
author={Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
booktitle={International Conference on Machine Learning},
pages={6905--6916},
year={2020},
organization={PMLR}
}
``` | lmqg/qa_squadshifts | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:1k<n<10k",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2004.14444",
"region:us"
] | 2022-11-05T02:43:19+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10k", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "SQuADShifts"} | 2022-11-05T05:10:26+00:00 | [
"2004.14444"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-1k<n<10k #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #arxiv-2004.14444 #region-us
| Dataset Card for "lmqg/qa\_squadshifts"
=======================================
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Point of Contact: Asahi Ushio
### Dataset Summary
This is the SQuADShifts dataset with a custom training/validation/test split, following lmqg/qg\_squadshifts.
### Supported Tasks and Leaderboards
* 'question-answering'
### Languages
English (en)
Dataset Structure
-----------------
### Data Fields
The data fields are the same among all splits.
#### plain\_text
* 'id': a 'string' feature of id
* 'title': a 'string' feature of title of the paragraph
* 'context': a 'string' feature of paragraph
* 'question': a 'string' feature of question
* 'answers': a 'json' feature of answers
### Data Splits
| [
"### Dataset Summary\n\n\nThis is SQuADShifts dataset with custom split of training/validation/test following lmqg/qg\\_squadshifts.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-1k<n<10k #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #arxiv-2004.14444 #region-us \n",
"### Dataset Summary\n\n\nThis is SQuADShifts dataset with custom split of training/validation/test following lmqg/qg\\_squadshifts.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] |
6f41e1fff033457ae09c882a845a548a1c99ddba | # Dataset Card for "winobias"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | henryscheible/winobias | [
"region:us"
] | 2022-11-05T05:11:18+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "eval", "num_bytes": 230400, "num_examples": 1584}, {"name": "train", "num_bytes": 226080, "num_examples": 1584}], "download_size": 83948, "dataset_size": 456480}} | 2022-11-05T05:11:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "winobias"
More Information needed | [
"# Dataset Card for \"winobias\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"winobias\"\n\nMore Information needed"
] |
3441c9e1f9d053e02e451d65b5e9cbd91759b6c6 | # Dataset Card for "diffusiondb_random_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | svjack/diffusiondb_random_10k | [
"region:us"
] | 2022-11-05T06:06:24+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "int64"}, {"name": "step", "dtype": "int64"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6221323762.0, "num_examples": 10000}], "download_size": 5912620994, "dataset_size": 6221323762.0}} | 2022-11-05T06:42:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "diffusiondb_random_10k"
More Information needed | [
"# Dataset Card for \"diffusiondb_random_10k\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"diffusiondb_random_10k\"\n\nMore Information needed"
] |
f5e692026a34569c12e41c76f8d454fd9656f041 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966288 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:05:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:08:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
0d4919bac6e97e65c5770de6df0c068c6668c1a8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: abhilash1910/albert-squad-v2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966289 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:06:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "abhilash1910/albert-squad-v2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:10:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: abhilash1910/albert-squad-v2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: abhilash1910/albert-squad-v2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: abhilash1910/albert-squad-v2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
7d1d7bfc1ce0bc6e4232a162fa62f4bd9fac84aa | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-cased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966290 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:06:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-cased-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:09:12+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-cased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-cased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-cased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
d3977836565f67db67cf3c73acff318889fe1fb8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966291 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:06:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:09:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
7ea37d0dd1563d17ca76bbbd94870d0c2ecae6d0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: distilbert-base-cased-distilled-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966292 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:06:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "distilbert-base-cased-distilled-squad", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:08:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: distilbert-base-cased-distilled-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: distilbert-base-cased-distilled-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: distilbert-base-cased-distilled-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
5910f37a9ea67db63f742fab701c7f58fa9f2878 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966293 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T09:06:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-05T09:09:32+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @anchal for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @anchal for evaluating this model."
] |
66a8056b617eaed85e83fe96b678b7219229ff03 | # Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump properties based on their semantics. | JoBeer/eclassCorpus | [
"region:us"
] | 2022-11-05T11:10:39+00:00 | {"dataset_info": {"features": [{"name": "did", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "datatype", "dtype": "string"}, {"name": "unit", "dtype": "string"}, {"name": "IRDI", "dtype": "string"}, {"name": "metalabel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 137123, "num_examples": 672}], "download_size": 48203, "dataset_size": 137123}} | 2023-01-07T12:35:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump properties based on their semantics. | [
"# Dataset Card for \"eclassCorpus\"\n\nThis Dataset consists of names and descriptions from ECLASS-standard pump-properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump-properties based on their semantics."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"eclassCorpus\"\n\nThis Dataset consists of names and descriptions from ECLASS-standard pump-properties. It can be used to evaluate models on the task of matching paraphrases to the ECLASS-standard pump-properties based on their semantics."
] |
c5883bfc76c2bc55ea74fede5d7b5271424b0e32 | # Dataset Card for "eclassQuery"
This dataset consists of paraphrases of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump properties based on their semantics. | JoBeer/eclassQuery | [
"task_categories:sentence-similarity",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2022-11-05T11:14:01+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "did", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "duplicate_id", "dtype": "int64"}, {"name": "metalabel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 147176, "num_examples": 1040}, {"name": "eval", "num_bytes": 100846, "num_examples": 671}], "download_size": 113268, "dataset_size": 248022}} | 2023-01-07T12:34:03+00:00 | [] | [
"en"
] | TAGS
#task_categories-sentence-similarity #size_categories-1K<n<10K #language-English #region-us
| # Dataset Card for "eclassQuery"
This dataset consists of paraphrases of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump properties based on their semantics. | [
"# Dataset Card for \"eclassQuery\"\n\nThis Dataset consists of paraphrases of ECLASS-standard pump-properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump-properties based on their semantics."
] | [
"TAGS\n#task_categories-sentence-similarity #size_categories-1K<n<10K #language-English #region-us \n",
"# Dataset Card for \"eclassQuery\"\n\nThis Dataset consists of paraphrases of ECLASS-standard pump-properties. It can be used to evaluate models on the task of matching these paraphrases to the actual ECLASS-standard pump-properties based on their semantics."
] |
b0f8f64e6d681f84caa925de86b77e2a61f47903 | # Dataset Card for "farsidecomics-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | maderix/farsidecomics-blip-captions | [
"region:us"
] | 2022-11-05T11:29:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37767218.0, "num_examples": 354}], "download_size": 37175120, "dataset_size": 37767218.0}} | 2022-11-05T11:29:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "farsidecomics-blip-captions"
More Information needed | [
"# Dataset Card for \"farsidecomics-blip-captions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"farsidecomics-blip-captions\"\n\nMore Information needed"
] |
0e804efcc3d6ef4934e925e9ffc7d73f8d33f194 | # Dataset Card for "diffusiondb_random_10k_zh_v1"
svjack/diffusiondb_random_10k_zh_v1 is a dataset of 10k English samples randomly drawn from [diffusiondb](https://github.com/poloclub/diffusiondb) and translated into Chinese with [NMT](https://en.wikipedia.org/wiki/Neural_machine_translation), with some corrections.<br/>
It was used to train the Stable Diffusion models in <br/> [svjack/Stable-Diffusion-FineTuned-zh-v0](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v0)<br/>
[svjack/Stable-Diffusion-FineTuned-zh-v1](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v1)<br/>
[svjack/Stable-Diffusion-FineTuned-zh-v2](https://huggingface.co/svjack/Stable-Diffusion-FineTuned-zh-v2)<br/>
It is also the data behind [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend), a fine-tuned version of the Stable Diffusion model trained on this self-translated 10k diffusiondb Chinese corpus, which "extends" it.
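
Given the ~5.8 GB of images, streaming a few prompt pairs is a convenient first look (a sketch using the `datasets` streaming API; the `prompt`/`zh_prompt` fields follow this card's schema):

```python
from datasets import load_dataset

ds = load_dataset("svjack/diffusiondb_random_10k_zh_v1", split="train", streaming=True)

# Print the first three English prompts with their Chinese translations
for i, sample in enumerate(ds):
    print(sample["prompt"], "->", sample["zh_prompt"])
    if i >= 2:
        break
```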
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | svjack/diffusiondb_random_10k_zh_v1 | [
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:10K",
"language:en",
"language:zh",
"region:us"
] | 2022-11-05T12:02:32+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en", "zh"], "multilinguality": ["multilingual"], "size_categories": ["10K"], "pretty_name": "Pok\u00e9mon BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "int64"}, {"name": "step", "dtype": "int64"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}, {"name": "zh_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5826763233.4353, "num_examples": 9841}], "download_size": 5829710525, "dataset_size": 5826763233.4353}} | 2022-11-08T04:08:23+00:00 | [] | [
"en",
"zh"
] | TAGS
#annotations_creators-machine-generated #language_creators-other #multilinguality-multilingual #size_categories-10K #language-English #language-Chinese #region-us
| # Dataset Card for "diffusiondb_random_10k_zh_v1"
svjack/diffusiondb_random_10k_zh_v1 is a dataset of 10k English samples randomly drawn from diffusiondb and translated into Chinese with NMT, with some manual corrections.<br/>
It was used to train the Stable Diffusion models<br/> svjack/Stable-Diffusion-FineTuned-zh-v0<br/>
svjack/Stable-Diffusion-FineTuned-zh-v1<br/>
svjack/Stable-Diffusion-FineTuned-zh-v2<br/>
It is also the data behind URL, a fine-tuned version of Stable Diffusion trained on this self-translated 10k diffusiondb Chinese corpus to "extend" it.
More Information needed | [
"# Dataset Card for \"diffusiondb_random_10k_zh_v1\"\n\nsvjack/diffusiondb_random_10k_zh_v1 is a dataset that random sample 10k English samples from diffusiondb and use NMT translate them into Chinese with some corrections.<br/>\n\nit used to train stable diffusion models in <br/> svjack/Stable-Diffusion-FineTuned-zh-v0<br/>\nsvjack/Stable-Diffusion-FineTuned-zh-v1<br/>\nsvjack/Stable-Diffusion-FineTuned-zh-v2<br/>\n\nAnd is the data support of URL which is a fine tune version of Stable Diffusion model on self-translate 10k diffusiondb Chinese Corpus and \"extend\" it.\n\n\nMore Information needed"
] | [
"TAGS\n#annotations_creators-machine-generated #language_creators-other #multilinguality-multilingual #size_categories-10K #language-English #language-Chinese #region-us \n",
"# Dataset Card for \"diffusiondb_random_10k_zh_v1\"\n\nsvjack/diffusiondb_random_10k_zh_v1 is a dataset that random sample 10k English samples from diffusiondb and use NMT translate them into Chinese with some corrections.<br/>\n\nit used to train stable diffusion models in <br/> svjack/Stable-Diffusion-FineTuned-zh-v0<br/>\nsvjack/Stable-Diffusion-FineTuned-zh-v1<br/>\nsvjack/Stable-Diffusion-FineTuned-zh-v2<br/>\n\nAnd is the data support of URL which is a fine tune version of Stable Diffusion model on self-translate 10k diffusiondb Chinese Corpus and \"extend\" it.\n\n\nMore Information needed"
] |
ced75dce72ba1810bd050272470b07b1db519ebc | # Dataset Card for "gal_yair_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_8300_1664x832 | [
"region:us"
] | 2022-11-05T14:04:29+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1502268207.4, "num_examples": 8300}], "download_size": 1410808567, "dataset_size": 1502268207.4}} | 2022-11-05T14:54:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "gal_yair_new"
More Information needed | [
"# Dataset Card for \"gal_yair_new\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"gal_yair_new\"\n\nMore Information needed"
] |
c945b082ca08d0a8f3ba227fb78404a09614c36e | # Dataset Card for "counterfact-tracing"
This is adapted from the counterfact dataset from the excellent [ROME paper](https://rome.baulab.info/) from David Bau and Kevin Meng.
This is a dataset of 21919 factual relations, formatted as `data["prompt"]==f"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}"`. Each has two responses `data["target_true"]` and `data["target_false"]`, which are intended to go immediately after the prompt.
The dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence).
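To make the format concrete, here is a minimal sketch (my own, not the authors' code) that loads the dataset and computes the logit difference recommended below; it assumes `gpt2` as a stand-in model and that both targets tokenize to a single token:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

ds = load_dataset("NeelNanda/counterfact-tracing", split="train")
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ex = ds[0]
# Targets already carry a leading space, so they tokenize cleanly after the prompt.
true_id = tok(ex["target_true"], add_special_tokens=False)["input_ids"][0]
false_id = tok(ex["target_false"], add_special_tokens=False)["input_ids"][0]

with torch.no_grad():
    logits = model(**tok(ex["prompt"], return_tensors="pt")).logits
next_tok = logits[0, -1]  # next-token logits right after the prompt
logit_diff = (next_tok[true_id] - next_tok[false_id]).item()
print(ex["prompt"], "->", ex["target_true"].strip(), "| logit diff:", round(logit_diff, 3))
```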
Each fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single-token target!), so as to control for, e.g., the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent [Interpretability In the Wild](https://arxiv.org/abs/2211.00593) paper). | NeelNanda/counterfact-tracing | [
"arxiv:2211.00593",
"region:us"
] | 2022-11-05T15:09:51+00:00 | {"dataset_info": {"features": [{"name": "relation", "dtype": "string"}, {"name": "relation_prefix", "dtype": "string"}, {"name": "relation_suffix", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "relation_id", "dtype": "string"}, {"name": "target_false_id", "dtype": "string"}, {"name": "target_true_id", "dtype": "string"}, {"name": "target_true", "dtype": "string"}, {"name": "target_false", "dtype": "string"}, {"name": "subject", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3400668, "num_examples": 21919}], "download_size": 1109314, "dataset_size": 3400668}} | 2022-11-05T15:19:43+00:00 | [
"2211.00593"
] | [] | TAGS
#arxiv-2211.00593 #region-us
| # Dataset Card for "counterfact-tracing"
This is adapted from the counterfact dataset from the excellent ROME paper from David Bau and Kevin Meng.
This is a dataset of 21919 factual relations, formatted as 'data["prompt"]==f"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}"'. Each has two responses 'data["target_true"]' and 'data["target_false"]', which are intended to go immediately after the prompt.
The dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence).
Each fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single-token target!), so as to control for, e.g., the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent Interpretability In the Wild paper). | [
"# Dataset Card for \"counterfact-tracing\"\n\nThis is adapted from the counterfact dataset from the excellent ROME paper from David Bau and Kevin Meng.\n\nThis is a dataset of 21919 factual relations, formatted as 'data[\"prompt\"]==f\"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}\"'. Each has two responses 'data[\"target_true\"]' and 'data[\"target_false\"]' which is intended to go immediately after the prompt.\n\nThe dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence). \n\nEach fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single token target!), so as to control for eg the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent Interpretability In the Wild paper)."
] | [
"TAGS\n#arxiv-2211.00593 #region-us \n",
"# Dataset Card for \"counterfact-tracing\"\n\nThis is adapted from the counterfact dataset from the excellent ROME paper from David Bau and Kevin Meng.\n\nThis is a dataset of 21919 factual relations, formatted as 'data[\"prompt\"]==f\"{data['relation_prefix']}{data['subject']}{data['relation_suffix']}\"'. Each has two responses 'data[\"target_true\"]' and 'data[\"target_false\"]' which is intended to go immediately after the prompt.\n\nThe dataset was originally designed for memory editing in models. I made this for a research project doing mechanistic interpretability of how models recall factual knowledge, building on their causal tracing technique, and so stripped their data down to the information relevant to causal tracing. I also prepended spaces where relevant so that the subject and targets can be properly tokenized as is (spaces are always prepended to targets, and are prepended to subjects unless the subject is at the start of a sentence). \n\nEach fact has both a true and false target. I recommend measuring the logit *difference* between the true and false target (at least, if it's a single token target!), so as to control for eg the parts of the model which identify that it's supposed to be giving a fact of this type at all. (Idea inspired by the excellent Interpretability In the Wild paper)."
] |
9ce26cfd13b8a40a09229eb582d654bf774c11cb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: w11wo/indonesian-roberta-base-indonli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@afaji](https://huggingface.co/afaji) for evaluating this model. | autoevaluate/autoeval-eval-indonli-indonli-717ea6-1995866375 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-05T18:25:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["indonli"], "eval_info": {"task": "natural_language_inference", "model": "w11wo/indonesian-roberta-base-indonli", "metrics": [], "dataset_name": "indonli", "dataset_config": "indonli", "dataset_split": "test_expert", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-11-05T18:26:33+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: w11wo/indonesian-roberta-base-indonli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @afaji for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: w11wo/indonesian-roberta-base-indonli\n* Dataset: indonli\n* Config: indonli\n* Split: test_expert\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @afaji for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: w11wo/indonesian-roberta-base-indonli\n* Dataset: indonli\n* Config: indonli\n* Split: test_expert\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @afaji for evaluating this model."
] |
357bc4f6af754b70dfbb6ced6f48e9728baa8e0d |
# Dataset Card for BIOSSES
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu]([email protected]) and [Arzucan Özgür]([email protected])
### Dataset Summary
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
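For illustration, the evaluation metric can be computed as below; the score lists are made-up stand-ins, not values from the dataset:

```python
from scipy.stats import pearsonr

gold = [4.0, 2.2, 0.0, 3.6, 1.4]  # mean annotator scores (illustrative values)
pred = [3.8, 1.9, 0.4, 3.5, 1.0]  # a model's similarity estimates (illustrative values)

r, p_value = pearsonr(gold, pred)
print(f"Pearson r = {r:.3f}")  # r >= 0.80 would count as "very strong" above
```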
### Data Splits (From BLUE Benchmark)
|name|Train|Dev|Test|
|:--:|:--:|:--:|:--:|
|biosses|64|16|20|
### Supported Tasks and Leaderboards
Biomedical Semantic Similarity Scoring.
### Languages
English.
## Dataset Structure
### Data Instances
For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).
```json
{
"id": "0",
"sentence1": "Centrosomes increase both in size and in microtubule-nucleating capacity just before mitotic entry.",
"sentence2": "Functional studies showed that, when introduced into cell lines, miR-146a was found to promote cell proliferation in cervical cancer cells, which suggests that miR-146a works as an oncogenic miRNA in these cancers.",
"score": 0.0
}
```
### Data Fields
- `sentence1`: string
- `sentence2`: string
- `score`: float ranging from 0 (no relation) to 4 (equivalent)
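A minimal loading sketch, assuming this repository loads directly with the `datasets` library and exposes a single `train` split:

```python
from datasets import load_dataset

# Load the 100 sentence pairs and preview a few examples.
biosses = load_dataset("qanastek/Biosses-BLUE", split="train")
for ex in biosses.select(range(3)):
    print(ex["sentence1"][:60], "|", ex["sentence2"][:60], "|", ex["score"])
```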
## Dataset Creation
### Curation Rationale
### Source Data
The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/).
### Annotations
#### Annotation process
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It shows strong agreement among the annotators: the lowest correlation is 0.902, which can be considered an upper bound for an algorithmic measure evaluated on this dataset.
| |Correlation r |
|----------:|--------------:|
|Annotator A| 0.952|
|Annotator B| 0.958|
|Annotator C| 0.917|
|Annotator D| 0.902|
|Annotator E| 0.941|
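The leave-one-out agreement behind this table can be reproduced in spirit as follows; the scores here are randomly generated stand-ins, since the raw annotations are not included in this card:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
scores = rng.uniform(0, 4, size=(5, 100))  # 5 annotators x 100 pairs (stand-in data)

for i in range(5):
    others_mean = np.delete(scores, i, axis=0).mean(axis=0)  # average of the other four
    r, _ = pearsonr(scores[i], others_mean)
    print(f"Annotator {chr(65 + i)}: r = {r:.3f}")  # near 0 here, since stand-in scores are random
```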
## Additional Information
### Dataset Curators
- Gizem Soğancıoğlu, [email protected]
- Hakime Öztürk, [email protected]
- Arzucan Özgür, [email protected]
Bogazici University, Istanbul, Turkey
### Licensing Information
BIOSSES is made available under the terms of [The GNU General Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
### Citation Information
```bibtex
@article{10.1093/bioinformatics/btx238,
author = {Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
title = "{BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}",
journal = {Bioinformatics},
volume = {33},
number = {14},
pages = {i49-i58},
year = {2017},
month = {07},
abstract = "{The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems up to 42.6\\% in terms of the Pearson correlation metric.A web-based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/.}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btx238},
url = {https://doi.org/10.1093/bioinformatics/btx238},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/33/14/i49/25157316/btx238.pdf},
}
```
### Contributions
Thanks to [@qanastek](https://github.com/qanastek) for adding this dataset.
| qanastek/Biosses-BLUE | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gpl-3.0",
"region:us"
] | 2022-11-05T19:27:31+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"], "paperswithcode_id": "biosses", "pretty_name": "BIOSSES", "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 32783, "num_examples": 100}], "download_size": 36324, "dataset_size": 32783}} | 2022-11-05T23:23:58+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-gpl-3.0 #region-us
| Dataset Card for BIOSSES
========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: BIOSSES: a semantic sentence similarity estimation system for the biomedical domain
* Point of Contact: Gizem Soğancıoğlu and Arzucan Özgür
### Dataset Summary
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
* very strong: 0.80–1.00
* strong: 0.60–0.79
* moderate: 0.40–0.59
* weak: 0.20–0.39
* very weak: 0.00–0.19
### Data Splits (From BLUE Benchmark)
### Supported Tasks and Leaderboards
Biomedical Semantic Similarity Scoring.
### Languages
English.
Dataset Structure
-----------------
### Data Instances
For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).
### Data Fields
* 'sentence1': string
* 'sentence2': string
* 'score': float ranging from 0 (no relation) to 4 (equivalent)
Dataset Creation
----------------
### Curation Rationale
### Source Data
The TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.
### Annotations
#### Annotation process
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It shows strong agreement among the annotators: the lowest correlation is 0.902, which can be considered an upper bound for an algorithmic measure evaluated on this dataset.
Additional Information
----------------------
### Dataset Curators
* Gizem Soğancıoğlu, gizemsogancioglu@URL
* Hakime Öztürk, URL@URL
* Arzucan Özgür, gizemsogancioglu@URL
Bogazici University, Istanbul, Turkey
### Licensing Information
BIOSSES is made available under the terms of The GNU General Public License v.3.0.
### Contributions
Thanks to @qanastek for adding this dataset.
| [
"### Dataset Summary\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19",
"### Data Splits (From BLUE Benchmark)",
"### Supported Tasks and Leaderboards\n\n\nBiomedical Semantic Similarity Scoring.",
"### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).",
"### Data Fields\n\n\n* 'sentence 1': string\n* 'sentence 2': string\n* 'score': float ranging from 0 (no relation) to 4 (equivalent)\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nThe TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.",
"### Annotations",
"#### Annotation process\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n\n\nThe table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.\n\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Gizem Soğancıoğlu, gizemsogancioglu@URL\n* Hakime Öztürk, URL@URL\n* Arzucan Özgür, gizemsogancioglu@URL\nBogazici University, Istanbul, Turkey",
"### Licensing Information\n\n\nBIOSSES is made available under the terms of The GNU Common Public License v.3.0.",
"### Contributions\n\n\nThanks to @qanastek for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-gpl-3.0 #region-us \n",
"### Dataset Summary\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19",
"### Data Splits (From BLUE Benchmark)",
"### Supported Tasks and Leaderboards\n\n\nBiomedical Semantic Similarity Scoring.",
"### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).",
"### Data Fields\n\n\n* 'sentence 1': string\n* 'sentence 2': string\n* 'score': float ranging from 0 (no relation) to 4 (equivalent)\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data\n\n\nThe TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.",
"### Annotations",
"#### Annotation process\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n\n\nThe table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.\n\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Gizem Soğancıoğlu, gizemsogancioglu@URL\n* Hakime Öztürk, URL@URL\n* Arzucan Özgür, gizemsogancioglu@URL\nBogazici University, Istanbul, Turkey",
"### Licensing Information\n\n\nBIOSSES is made available under the terms of The GNU Common Public License v.3.0.",
"### Contributions\n\n\nThanks to @qanastek for adding this dataset."
] |
e9bad8693d5b42ddab7e1c15f2b5524680c5efb2 | Use `duality_style, art by duality_style` in your prompt; this will give a monochrome look with wings/feathers, flowers, and an opposite reflection.
License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

Please read the full license here | flamesbob/Duality_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-05T20:34:29+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-05T20:36:53+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| Use 'duality_style, art by duality_style' in your prompt; this will give a monochrome look with wings/feathers, flowers, and an opposite reflection.
License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

Please read the full license here | [] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n"
] |
545e82b4d2819a24aae1ff54048ecf98b7b28231 | # Dataset Card for "ade20k-panoptic-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nielsr/ade20k-panoptic-demo | [
"region:us"
] | 2022-11-05T21:16:00+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}, {"name": "segments_info", "list": [{"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 492746.0, "num_examples": 10}, {"name": "validation", "num_bytes": 461402.0, "num_examples": 10}], "download_size": 949392, "dataset_size": 954148.0}} | 2022-11-06T17:13:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ade20k-panoptic-demo"
More Information needed | [
"# Dataset Card for \"ade20k-panoptic-demo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ade20k-panoptic-demo\"\n\nMore Information needed"
] |
8bdd59805ec01cc3920d42a7633083e4dea28265 |
# Lands Between Elden Ring Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File: ```lands_between```
To use it in a prompt: ```"art by lands_between"```
For best use write something like ```highly detailed background art by lands_between```
### Version 2:
File: ```elden_ring```
To use it in a prompt: ```"art by elden_ring"```
For best use write something like ```highly detailed background art by elden_ring```
If it is too strong just add [] around it.
Trained until 7000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Pajrsvy.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Bly3NJi.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/IxLNgB6.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/6rJ5ppD.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ueTEHtb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/dlVIwXs.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/Elden_Ring_Embeddings | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-05T21:27:46+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-12T15:02:39+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Lands Between Elden Ring Embedding / Textual Inversion
======================================================
Usage
-----
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
Two different Versions:
### Version 1:
File:
To use it in a prompt:
For best use write something like
### Version 2:
File:
To use it in a prompt:
For best use write something like
If it is too strong just add [] around it.
Trained until 7000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [
"### Version 1:\n\n\nFile:\n\n\nTo use it in a prompt:\n\n\nFor best use write something like",
"### Version 2:\n\n\nFile:\n\n\nTo use it in a prompt:\n\n\nFor best use write something like\n\n\nIf it is to strong just add [] around it.\n\n\nTrained until 7000 steps\n\n\nHave fun :)\n\n\nExample Pictures\n----------------\n\n\n\n\n\n\nLicense\n-------\n\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies:\n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n",
"### Version 1:\n\n\nFile:\n\n\nTo use it in a prompt:\n\n\nFor best use write something like",
"### Version 2:\n\n\nFile:\n\n\nTo use it in a prompt:\n\n\nFor best use write something like\n\n\nIf it is to strong just add [] around it.\n\n\nTrained until 7000 steps\n\n\nHave fun :)\n\n\nExample Pictures\n----------------\n\n\n\n\n\n\nLicense\n-------\n\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies:\n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] |