sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---
8e8a05ab1ad3005e3a2f0242377d15b0aa4fada0 |
# Slyvanie Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by slyvanie_style"```
If it is too strong, just add [] around it.
This embedding was trained to 14500 steps.
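For example, with the webui's attention syntax (the subject text below is just an illustrative placeholder, not from the training data):
```
a misty forest shrine, intricate details, art by slyvanie_style
a misty forest shrine, intricate details, [art by slyvanie_style]
```
The bracketed variant weakens the embedding's influence on the result.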
If you'd like to support the amazing artist whose artwork contributed to this embedding's training, I'd highly recommend you check out slyvanie [here](https://www.deviantart.com/slyvanie), [here](https://www.artstation.com/slyvanie) and [here](https://slyvanie.weebly.com/).
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/0PaBO0M.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/XpdAIdo.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3TuxD9L.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/jsYluEQ.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/H9XScnZ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | grullborg/slyvanie_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T02:13:44+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-27T02:42:32+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Slyvanie Style Embedding / Textual Inversion
============================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt:
If it is too strong, just add [] around it.
This embedding was trained to 14500 steps.
If you'd like to support the amazing artist whose artwork contributed to this embedding's training, I'd highly recommend you check out slyvanie here, here and here.
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
d52a3cb0779c7f33f85566d48737fa380d206769 |
This dataset contains 5-second clips of birdcalls for audio generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: https://www.kaggle.com/code/johnowhitaker/peak-identification
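A minimal loading sketch is shown below; it assumes the `datasets` library, and the `audio` column name and `train` split are assumptions rather than details stated on this card:
```python
from datasets import load_dataset

# repo id is from this card; the "audio" column and "train" split are assumed
ds = load_dataset("tglcourse/5s_birdcall_samples_top20", split="train")
clip = ds[0]["audio"]  # an Audio feature decodes to {"array": ..., "sampling_rate": ...}
print(clip["sampling_rate"], len(clip["array"]))
```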
The audio is represented at 32kHz (mono) | tglcourse/5s_birdcall_samples_top20 | [
"license:unknown",
"region:us"
] | 2022-10-27T06:26:02+00:00 | {"license": ["unknown"], "pretty_name": "5s Birdcall Samples"} | 2022-10-27T06:34:37+00:00 | [] | [] | TAGS
#license-unknown #region-us
|
This dataset contains 5-second clips of birdcalls for audio generation tests.
There are 20 species represented, with ~500 recordings each. Recordings are from xeno-canto.
These clips were taken from longer samples by identifying calls within the recordings using the approach shown here: URL
The audio is represented at 32kHz (mono) | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
1904eb1374e46b71e86ae1940dbe01678df6c3c6 |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`).
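As a brief usage sketch, each task is exposed as a separate configuration of the loader (shown here with the canonical `glue` dataset id; loading this copy under its own id should behave the same):
```python
from datasets import load_dataset

# each GLUE task is its own configuration
cola = load_dataset("glue", "cola")  # DatasetDict with train/validation/test splits
print(cola["train"][0])              # e.g. {'sentence': "...", 'label': 1, 'idx': 0}
```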
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
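(In the test splits the gold labels are withheld, which is why `label` is `-1` in these examples.)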
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
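The integer labels can be mapped back to their string names through the dataset's `ClassLabel` feature; a short sketch using the `datasets` library:
```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli", split="train")
label = mnli.features["label"]   # ClassLabel(names=['entailment', 'neutral', 'contradiction'])
print(label.int2str(0))          # 'entailment'
print(label.str2int("neutral"))  # 1
```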
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
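The split sizes above can be verified programmatically; a sketch:
```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli")
print({name: split.num_rows for name, split in mnli.items()})
# {'train': 392702, 'validation_matched': 9815, 'validation_mismatched': 9832,
#  'test_matched': 9796, 'test_mismatched': 9847}
```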
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | quincyqiang/test | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"doi:10.57967/hf/0065",
"region:us"
] | 2022-10-27T07:07:57+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2022-10-27T07:17:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #doi-10.57967/hf/0065 #region-us
| Dataset Card for GLUE
=====================
Table of Contents
-----------------
* Dataset Card for GLUE
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Languages
+ Dataset Structure
- Data Instances
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Fields
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Splits
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 955.33 MB
* Size of the generated dataset: 229.68 MB
* Total amount of disk used: 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (URL), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli\_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli\_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 'en').
Dataset Structure
-----------------
### Data Instances
#### ax
* Size of downloaded dataset files: 0.21 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.44 MB
An example of 'test' looks as follows.
#### cola
* Size of downloaded dataset files: 0.36 MB
* Size of the generated dataset: 0.58 MB
* Total amount of disk used: 0.94 MB
An example of 'train' looks as follows.
#### mnli
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 78.65 MB
* Total amount of disk used: 376.95 MB
An example of 'train' looks as follows.
#### mnli\_matched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.52 MB
* Total amount of disk used: 301.82 MB
An example of 'test' looks as follows.
#### mnli\_mismatched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.73 MB
* Total amount of disk used: 302.02 MB
An example of 'test' looks as follows.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Fields
The data fields are the same among all splits.
#### ax
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### cola
* 'sentence': a 'string' feature.
* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).
* 'idx': a 'int32' feature.
#### mnli
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_matched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_mismatched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Splits
#### ax
#### cola
#### mnli
#### mnli\_matched
#### mnli\_mismatched
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset.
| [
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #doi-10.57967/hf/0065 #region-us \n",
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
] |
0dbbdb7bc4eda0c61bcbc73049e8aa39ef30913b |
# Dataset Card for V4Design Europeana style dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations [source](https://zenodo.org/record/4896487)
This version of the dataset is generated using the [CSV file](https://zenodo.org/record/4896487) hosted on Zenodo. This CSV file contains the labels with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consistency with the original dataset, and in case these URLs become valid again, these rows are preserved here. If you want only successfully loaded images in your dataset, you can filter out the missing images as follows.
```python
ds = ds.filter(lambda x: x['image'] is not None)
```
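For example, a minimal end-to-end sketch (assuming the Hugging Face `datasets` library; the dataset id and the `train` split name are taken from this repository's metadata):
```python
from datasets import load_dataset

# Load the style dataset; rows whose image URL no longer resolves
# yield `image` as None rather than a PIL image.
ds = load_dataset("biglam/v4design_europeana_style_dataset", split="train")

# Drop the rows whose image could not be fetched.
ds = ds.filter(lambda x: x["image"] is not None)

# Each remaining row pairs a PIL image with a `style` class label.
print(ds[0]["style"], ds[0]["image"].size)
```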
### Supported Tasks and Leaderboards
This dataset is primarily intended for `image-classification`.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@dataset{europeana_2021_4896487,
author = {Europeana and
V4Design},
title = {V4Design/Europeana style dataset},
month = jun,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.4896487},
url = {https://doi.org/10.5281/zenodo.4896487}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/v4design_europeana_style_dataset | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"license:other",
"region:us"
] | 2022-10-27T09:55:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "V4Design Europeana style dataset", "tags": [], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "style", "dtype": {"class_label": {"names": {"0": "Rococo", "1": "Baroque", "2": "Other"}}}}, {"name": "rights", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 536168550.923, "num_examples": 1613}], "download_size": 535393230, "dataset_size": 536168550.923}} | 2022-10-27T10:14:30+00:00 | [] | [] | TAGS
#task_categories-image-classification #annotations_creators-expert-generated #license-other #region-us
|
# Dataset Card for V4Design Europeana style dataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset contains:
> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations source
This version of the dataset is generated using the CSV file hosted on Zenodo. This CSV file contains the labels with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consistency with the original dataset, and in case these URLs become valid again, these rows are preserved here. If you want only successfully loaded images in your dataset, you can filter out the missing images as follows.
### Supported Tasks and Leaderboards
This dataset is primarily intended for 'image-classification'.
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @davanstrien for adding this dataset.
| [
"# Dataset Card for V4Design Europeana style dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset contains:\n> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations source\n\nThis version of the dataset is generated using the CSV file hosted on Zenodo. This CSV file contains the labels with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consitency with the original dataset and if these URLs become valid again, these rows of the data are preserved here. If you want only successfully loaded images in your dataset, you can filter out the missing images as follows.",
"### Supported Tasks and Leaderboards\n\nThis dataset is primarily intended for 'image-classification'.",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @davanstrien for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #annotations_creators-expert-generated #license-other #region-us \n",
"# Dataset Card for V4Design Europeana style dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis dataset contains:\n> 1614 paintings belonging to the categories Baroque, Rococo, and Other. The images were obtained using the Europeana Search API, selecting open objects from the art thematic collection. 24k images were obtained, from which the current dataset was derived. The labels were added by the V4Design team, using a custom annotation tool. As described in the project documentation, other categories were used besides Baroque and Rococo. But for the sake of training a machine learning model we have retained only the categories with a significant number of annotations source\n\nThis version of the dataset is generated using the CSV file hosted on Zenodo. This CSV file contains the labels with URLs for the relevant images. Some of these URLs no longer resolve to an image. For consitency with the original dataset and if these URLs become valid again, these rows of the data are preserved here. If you want only successfully loaded images in your dataset, you can filter out the missing images as follows.",
"### Supported Tasks and Leaderboards\n\nThis dataset is primarily intended for 'image-classification'.",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @davanstrien for adding this dataset."
] |
c4046158a56bfb31a1d03ab48d2b9b340bc2925f | ---
dataset_info:
- config_name: default
drop_labels: true
--- | polinaeterna/audios | [
"region:us"
] | 2022-10-27T10:28:42+00:00 | {} | 2022-11-03T12:47:07+00:00 | [] | [] | TAGS
#region-us
| ---
dataset_info:
- config_name: default
drop_labels: true
--- | [] | [
"TAGS\n#region-us \n"
] |
5b62ab4c6ef313d063a3c4da33cb14bb2fe94dc9 |
# Dataset Card for Early Printed Books Font Detection Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://doi.org/10.5281/zenodo.3366686
- **Paper:** https://doi.org/10.1145/3352631.3352640
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.
[More Information Needed]
### Supported Tasks and Leaderboards
The primary use case for this datasets is
- `multi-label-image-classification`: This dataset can be used to train a model for multi label image classification where each image can have one, or more labels.
- `image-classification`: This dataset could also be adapted to only predict a single label for each image
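That single-label adaptation can be sketched as follows (a hedged example assuming the Hugging Face `datasets` library and the `labels` field described under Data Fields below): keep only pages annotated with exactly one font group and promote that annotation to a flat label.

```python
from datasets import load_dataset

ds = load_dataset("biglam/early_printed_books_font_detection", split="train")

# Keep pages that carry exactly one font annotation...
single_label = ds.filter(lambda x: len(x["labels"]) == 1)

# ...and expose it as a plain `label` column, as expected by most
# single-label image-classification pipelines.
single_label = single_label.map(lambda x: {"label": x["labels"][0]})
```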
### Languages
The dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.
[More Information Needed]
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3072x3840 at 0x7F6AC192D850>,
'labels': [5]}
```
### Data Fields
This dataset contains two fields:
- `image`: the image of the book page
- `labels`: one or more labels for the font used in the book page depicted in the `image`
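
Because a page can carry several labels at once, multi-label training usually starts by expanding the `labels` sequence into a fixed-length multi-hot vector. A minimal sketch (again assuming the `datasets` library; the class count is read from the feature metadata rather than hard-coded):

```python
from datasets import load_dataset

ds = load_dataset("biglam/early_printed_books_font_detection", split="train")
num_classes = ds.features["labels"].feature.num_classes  # 12 font/extra classes

def to_multi_hot(example):
    # One slot per class, set to 1.0 where the page carries that label.
    vec = [0.0] * num_classes
    for idx in example["labels"]:
        vec[idx] = 1.0
    return {"multi_hot": vec}

ds = ds.map(to_multi_hot)
```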
### Data Splits
The dataset is broken into a train and test split with the following breakdown of number of examples:
- train: 24,866
- test: 10,757
## Dataset Creation
### Curation Rationale
The dataset was created to help train and evaluate automatic methods for font detection. The paper describing the dataset also states that:
>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.
### Source Data
#### Initial Data Collection and Normalization
The images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitätsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| biglam/early_printed_books_font_detection | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:expert-generated",
"size_categories:10K<n<100K",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-27T11:12:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["cc-by-nc-sa-4.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "Early Printed Books Font Detection Dataset", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "greek", "1": "antiqua", "2": "other_font", "3": "not_a_font", "4": "italic", "5": "rotunda", "6": "textura", "7": "fraktur", "8": "schwabacher", "9": "hebrew", "10": "bastarda", "11": "gotico_antiqua"}}}}], "splits": [{"name": "test", "num_bytes": 2345451, "num_examples": 10757}, {"name": "train", "num_bytes": 5430875, "num_examples": 24866}], "download_size": 44212934313, "dataset_size": 7776326}, "tags": []} | 2022-10-28T14:39:50+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #size_categories-10K<n<100K #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for Early Printed Books Font Detection Dataset
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastarda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.
### Supported Tasks and Leaderboards
The primary use case for this datasets is
- 'multi-label-image-classification': This dataset can be used to train a model for multi label image classification where each image can have one, or more labels.
- 'image-classification': This dataset could also be adapted to only predict a single label for each image
### Languages
The dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
### Data Fields
This dataset contains two fields:
- 'image': the image of the book page
- 'labels': one or more labels for the font used in the book page depicted in the 'image'
### Data Splits
The dataset is broken into a train and test split with the following breakdown of number of examples:
- train: 24,866
- test: 10,757
## Dataset Creation
### Curation Rationale
The dataset was created to help train and evaluate automatic methods for font detection. The paper describing the dataset also states that:
>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.
### Source Data
#### Initial Data Collection and Normalization
The images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitätsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for Early Printed Books Font Detection Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:URL\n- Paper:: URL\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\n> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastaπrda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.",
"### Supported Tasks and Leaderboards\n\nThe primary use case for this datasets is\n- 'multi-label-image-classification': This dataset can be used to train a model for multi label image classification where each image can have one, or more labels. \n- 'image-classification': This dataset could also be adapted to only predict a single label for each image",
"### Languages\n\nThe dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.",
"## Dataset Structure\n\nThis dataset has a single configuration.",
"### Data Instances\n\nAn example instance from this dataset:",
"### Data Fields\n\nThis dataset contains two fields:\n\n- 'image': the image of the book page\n- 'labels': one or more labels for the font used in the book page depicted in the 'image'",
"### Data Splits\n\nThe dataset is broken into a train and test split with the following breakdown of number of examples: \n\n- train: 24,866 \n- test: 10,757",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to help train and evaluate automatic methods for font detection. The paper describing the paper also states that:\n\n>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitäatsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #size_categories-10K<n<100K #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for Early Printed Books Font Detection Dataset",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:URL\n- Paper:: URL\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\n> This dataset is composed of photos of various resolution of 35'623 pages of printed books dating from the 15th to the 18th century. Each page has been attributed by experts from one to five labels corresponding to the font groups used in the text, with two extra-classes for non-textual content and fonts not present in the following list: Antiqua, Bastaπrda, Fraktur, Gotico Antiqua, Greek, Hebrew, Italic, Rotunda, Schwabacher, and Textura.",
"### Supported Tasks and Leaderboards\n\nThe primary use case for this datasets is\n- 'multi-label-image-classification': This dataset can be used to train a model for multi label image classification where each image can have one, or more labels. \n- 'image-classification': This dataset could also be adapted to only predict a single label for each image",
"### Languages\n\nThe dataset includes books from a range of libraries (see below for further details). The paper doesn't provide a detailed overview of language breakdown. However, the books are from the 15th-18th century and appear to be dominated by European languages from that time period. The dataset also includes Hebrew.",
"## Dataset Structure\n\nThis dataset has a single configuration.",
"### Data Instances\n\nAn example instance from this dataset:",
"### Data Fields\n\nThis dataset contains two fields:\n\n- 'image': the image of the book page\n- 'labels': one or more labels for the font used in the book page depicted in the 'image'",
"### Data Splits\n\nThe dataset is broken into a train and test split with the following breakdown of number of examples: \n\n- train: 24,866 \n- test: 10,757",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to help train and evaluate automatic methods for font detection. The paper describing the paper also states that:\n\n>data was cherry-picked, thus it is not statistically representative of what can be found in libraries. For example, as we had a small amount of Textura at the start, we specifically looked for more pages containing this font group, so we can expect that less than 3.6 % of randomly selected pages from libraries would contain Textura.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe images in this dataset are from books held by the British Library (London), Bayerische Staatsbibliothek München, Staatsbibliothek zu Berlin, Universitätsbibliothek Erlangen, Universitätsbibliothek Heidelberg, Staats- und Universitäatsbibliothek Göttingen, Stadt- und Universitätsbibliothek Köln, Württembergische Landesbibliothek Stuttgart and Herzog August Bibliothek Wolfenbüttel.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
8d0ff9103525b7e3579b180230fddb3186258301 |
# Tabular Benchmark
## Dataset Description
This dataset is a curation of various datasets from [openML](https://www.openml.org/) and is curated to benchmark performance of various machine learning algorithms.
- **Repository:** https://github.com/LeoGrin/tabular-benchmark/community
- **Paper:** https://hal.archives-ouvertes.fr/hal-03723551v2/document
### Dataset Summary
Benchmark made of curation of various tabular data learning tasks, including:
- Regression from Numerical and Categorical Features
- Regression from Numerical Features
- Classification from Numerical and Categorical Features
- Classification from Numerical Features
### Supported Tasks and Leaderboards
- `tabular-regression`
- `tabular-classification`
## Dataset Structure
### Data Splits
This dataset consists of four splits (folders) based on tasks and datasets included in tasks.
- reg_num: Task identifier for regression on numerical features.
- reg_cat: Task identifier for regression on numerical and categorical features.
- clf_num: Task identifier for classification on numerical features.
- clf_cat: Task identifier for classification on categorical features.
Depending on the dataset you want to load, you can load the dataset by passing `task_name/dataset_name` to `data_files` argument of `load_dataset` like below:
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files="reg_cat/house_sales.csv")
```
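To illustrate the intended benchmarking loop, here is a hedged baseline sketch on one of the numerical-classification tables. It assumes `pandas` and `scikit-learn` are installed and that the prediction target is the last CSV column; verify this against the header of the file you pick.
```python
from datasets import load_dataset
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

ds = load_dataset("inria-soda/tabular-benchmark",
                  data_files="clf_num/electricity.csv")
df = ds["train"].to_pandas()

# Assumption: the target is the final column of the CSV.
X, y = df.iloc[:, :-1], df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# HistGradientBoosting is one of the default baselines used in the
# curation procedure described below.
clf = HistGradientBoostingClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```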
## Dataset Creation
### Curation Rationale
This dataset is curated to benchmark performance of tree based models against neural networks. The process of picking the datasets for curation is mentioned in the paper as below:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio below 1/10.
- **Undocumented datasets** We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets is of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately
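
One plausible reading of the "Not too easy" filter above, written out as a sketch (illustrative only: the per-task scoring metric is not restated here, and the exact comparison used in the paper may differ):
```python
def is_too_easy(simple_score, resnet_score, hgb_score, tol=0.05):
    """Flag a dataset where the best simple model (single tree or linear)
    lands within `tol` relative difference of both strong default baselines."""
    return all(
        abs(simple_score - baseline) / abs(baseline) < tol
        for baseline in (resnet_score, hgb_score)
    )

# e.g. a simple model at 0.84 vs baselines at 0.86 and 0.85 is "too easy"
print(is_too_easy(0.84, 0.86, 0.85))  # True
```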
### Source Data
**Numerical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|7.0|https://www.openml.org/d/151|https://www.openml.org/d/44120|
|covertype|566602.0|10.0|https://www.openml.org/d/293|https://www.openml.org/d/44121|
|pol|10082.0|26.0|https://www.openml.org/d/722|https://www.openml.org/d/44122|
|house_16H|13488.0|16.0|https://www.openml.org/d/821|https://www.openml.org/d/44123|
|MagicTelescope|13376.0|10.0|https://www.openml.org/d/1120|https://www.openml.org/d/44125|
|bank-marketing|10578.0|7.0|https://www.openml.org/d/1461|https://www.openml.org/d/44126|
|Bioresponse|3434.0|419.0|https://www.openml.org/d/4134|https://www.openml.org/d/45019|
|MiniBooNE|72998.0|50.0|https://www.openml.org/d/41150|https://www.openml.org/d/44128|
|default-of-credit-card-clients|13272.0|20.0|https://www.openml.org/d/42477|https://www.openml.org/d/45020|
|Higgs|940160.0|24.0|https://www.openml.org/d/42769|https://www.openml.org/d/44129|
|eye_movements|7608.0|20.0|https://www.openml.org/d/1044|https://www.openml.org/d/44130|
|Diabetes130US|71090.0|7.0|https://www.openml.org/d/4541|https://www.openml.org/d/45022|
|jannis|57580.0|54.0|https://www.openml.org/d/41168|https://www.openml.org/d/45021|
|heloc|10000.0|22.0|https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc?select=heloc_dataset_v1+%281%29.csv|https://www.openml.org/d/45026|
|credit|16714.0|10.0|https://www.kaggle.com/c/GiveMeSomeCredit/data?select=cs-training.csv|https://www.openml.org/d/44089|
|california|20634.0|8.0|https://www.dcc.fc.up.pt/ltorgo/Regression/cal_housing.html|https://www.openml.org/d/45028|
**Categorical Classification**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|electricity|38474.0|8.0|https://www.openml.org/d/151|https://www.openml.org/d/44156|
|eye_movements|7608.0|23.0|https://www.openml.org/d/1044|https://www.openml.org/d/44157|
|covertype|423680.0|54.0|https://www.openml.org/d/1596|https://www.openml.org/d/44159|
|albert|58252.0|31.0|https://www.openml.org/d/41147|https://www.openml.org/d/45035|
|compas-two-years|4966.0|11.0|https://www.openml.org/d/42192|https://www.openml.org/d/45039|
|default-of-credit-card-clients|13272.0|21.0|https://www.openml.org/d/42477|https://www.openml.org/d/45036|
|road-safety|111762.0|32.0|https://www.openml.org/d/42803|https://www.openml.org/d/45038|
**Numerical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|cpu_act|8192.0|21.0|https://www.openml.org/d/197|https://www.openml.org/d/44132|
|pol|15000.0|26.0|https://www.openml.org/d/201|https://www.openml.org/d/44133|
|elevators|16599.0|16.0|https://www.openml.org/d/216|https://www.openml.org/d/44134|
|wine_quality|6497.0|11.0|https://www.openml.org/d/287|https://www.openml.org/d/44136|
|Ailerons|13750.0|33.0|https://www.openml.org/d/296|https://www.openml.org/d/44137|
|yprop_4_1|8885.0|42.0|https://www.openml.org/d/416|https://www.openml.org/d/45032|
|houses|20640.0|8.0|https://www.openml.org/d/537|https://www.openml.org/d/44138|
|house_16H|22784.0|16.0|https://www.openml.org/d/574|https://www.openml.org/d/44139|
|delays_zurich_transport|5465575.0|9.0|https://www.openml.org/d/40753|https://www.openml.org/d/45034|
|diamonds|53940.0|6.0|https://www.openml.org/d/42225|https://www.openml.org/d/44140|
|Brazilian_houses|10692.0|8.0|https://www.openml.org/d/42688|https://www.openml.org/d/44141|
|Bike_Sharing_Demand|17379.0|6.0|https://www.openml.org/d/42712|https://www.openml.org/d/44142|
|nyc-taxi-green-dec-2016|581835.0|9.0|https://www.openml.org/d/42729|https://www.openml.org/d/44143|
|house_sales|21613.0|15.0|https://www.openml.org/d/42731|https://www.openml.org/d/44144|
|sulfur|10081.0|6.0|https://www.openml.org/d/23515|https://www.openml.org/d/44145|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/44146|
|MiamiHousing2016|13932.0|14.0|https://www.openml.org/d/43093|https://www.openml.org/d/44147|
|superconduct|21263.0|79.0|https://www.openml.org/d/43174|https://www.openml.org/d/44148|
**Categorical Regression**
|dataset_name|n_samples|n_features|original_link|new_link|
|---|---|---|---|---|
|topo_2_1|8885.0|255.0|https://www.openml.org/d/422|https://www.openml.org/d/45041|
|analcatdata_supreme|4052.0|7.0|https://www.openml.org/d/504|https://www.openml.org/d/44055|
|visualizing_soil|8641.0|4.0|https://www.openml.org/d/688|https://www.openml.org/d/44056|
|delays_zurich_transport|5465575.0|12.0|https://www.openml.org/d/40753|https://www.openml.org/d/45045|
|diamonds|53940.0|9.0|https://www.openml.org/d/42225|https://www.openml.org/d/44059|
|Allstate_Claims_Severity|188318.0|124.0|https://www.openml.org/d/42571|https://www.openml.org/d/45046|
|Mercedes_Benz_Greener_Manufacturing|4209.0|359.0|https://www.openml.org/d/42570|https://www.openml.org/d/44061|
|Brazilian_houses|10692.0|11.0|https://www.openml.org/d/42688|https://www.openml.org/d/44062|
|Bike_Sharing_Demand|17379.0|11.0|https://www.openml.org/d/42712|https://www.openml.org/d/44063|
|Airlines_DepDelay_1M|1000000.0|5.0|https://www.openml.org/d/42721|https://www.openml.org/d/45047|
|nyc-taxi-green-dec-2016|581835.0|16.0|https://www.openml.org/d/42729|https://www.openml.org/d/44065|
|abalone|4177.0|8.0|https://www.openml.org/d/42726|https://www.openml.org/d/45042|
|house_sales|21613.0|17.0|https://www.openml.org/d/42731|https://www.openml.org/d/44066|
|seattlecrime6|52031.0|4.0|https://www.openml.org/d/42496|https://www.openml.org/d/45043|
|medical_charges|163065.0|5.0|https://www.openml.org/d/42720|https://www.openml.org/d/45048|
|particulate-matter-ukair-2017|394299.0|6.0|https://www.openml.org/d/42207|https://www.openml.org/d/44068|
|SGEMM_GPU_kernel_performance|241600.0|9.0|https://www.openml.org/d/43144|https://www.openml.org/d/44069|
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
[More Information Needed]
### Citation Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2
| inria-soda/tabular-benchmark | [
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"region:us"
] | 2022-10-27T11:34:58+00:00 | {"annotations_creators": [], "license": [], "task_categories": ["tabular-classification", "tabular-regression"], "pretty_name": "tabular_benchmark", "tags": [], "configs": [{"config_name": "clf_cat_albert", "data_files": "clf_cat/albert.csv"}, {"config_name": "clf_cat_compas-two-years", "data_files": "clf_cat/compas-two-years.csv"}, {"config_name": "clf_cat_covertype", "data_files": "clf_cat/covertype.csv"}, {"config_name": "clf_cat_default-of-credit-card-clients", "data_files": "clf_cat/default-of-credit-card-clients.csv"}, {"config_name": "clf_cat_electricity", "data_files": "clf_cat/electricity.csv"}, {"config_name": "clf_cat_eye_movements", "data_files": "clf_cat/eye_movements.csv"}, {"config_name": "clf_cat_road-safety", "data_files": "clf_cat/road-safety.csv"}, {"config_name": "clf_num_Bioresponse", "data_files": "clf_num/Bioresponse.csv"}, {"config_name": "clf_num_Diabetes130US", "data_files": "clf_num/Diabetes130US.csv"}, {"config_name": "clf_num_Higgs", "data_files": "clf_num/Higgs.csv"}, {"config_name": "clf_num_MagicTelescope", "data_files": "clf_num/MagicTelescope.csv"}, {"config_name": "clf_num_MiniBooNE", "data_files": "clf_num/MiniBooNE.csv"}, {"config_name": "clf_num_bank-marketing", "data_files": "clf_num/bank-marketing.csv"}, {"config_name": "clf_num_california", "data_files": "clf_num/california.csv"}, {"config_name": "clf_num_covertype", "data_files": "clf_num/covertype.csv"}, {"config_name": "clf_num_credit", "data_files": "clf_num/credit.csv"}, {"config_name": "clf_num_default-of-credit-card-clients", "data_files": "clf_num/default-of-credit-card-clients.csv"}, {"config_name": "clf_num_electricity", "data_files": "clf_num/electricity.csv"}, {"config_name": "clf_num_eye_movements", "data_files": "clf_num/eye_movements.csv"}, {"config_name": "clf_num_heloc", "data_files": "clf_num/heloc.csv"}, {"config_name": "clf_num_house_16H", "data_files": "clf_num/house_16H.csv"}, {"config_name": "clf_num_jannis", "data_files": "clf_num/jannis.csv"}, {"config_name": "clf_num_pol", "data_files": "clf_num/pol.csv"}, {"config_name": "reg_cat_Airlines_DepDelay_1M", "data_files": "reg_cat/Airlines_DepDelay_1M.csv"}, {"config_name": "reg_cat_Allstate_Claims_Severity", "data_files": "reg_cat/Allstate_Claims_Severity.csv"}, {"config_name": "reg_cat_Bike_Sharing_Demand", "data_files": "reg_cat/Bike_Sharing_Demand.csv"}, {"config_name": "reg_cat_Brazilian_houses", "data_files": "reg_cat/Brazilian_houses.csv"}, {"config_name": "reg_cat_Mercedes_Benz_Greener_Manufacturing", "data_files": "reg_cat/Mercedes_Benz_Greener_Manufacturing.csv"}, {"config_name": "reg_cat_SGEMM_GPU_kernel_performance", "data_files": "reg_cat/SGEMM_GPU_kernel_performance.csv"}, {"config_name": "reg_cat_abalone", "data_files": "reg_cat/abalone.csv"}, {"config_name": "reg_cat_analcatdata_supreme", "data_files": "reg_cat/analcatdata_supreme.csv"}, {"config_name": "reg_cat_delays_zurich_transport", "data_files": "reg_cat/delays_zurich_transport.csv"}, {"config_name": "reg_cat_diamonds", "data_files": "reg_cat/diamonds.csv"}, {"config_name": "reg_cat_house_sales", "data_files": "reg_cat/house_sales.csv"}, {"config_name": "reg_cat_medical_charges", "data_files": "reg_cat/medical_charges.csv"}, {"config_name": "reg_cat_nyc-taxi-green-dec-2016", "data_files": "reg_cat/nyc-taxi-green-dec-2016.csv"}, {"config_name": "reg_cat_particulate-matter-ukair-2017", "data_files": "reg_cat/particulate-matter-ukair-2017.csv"}, {"config_name": "reg_cat_seattlecrime6", "data_files": "reg_cat/seattlecrime6.csv"}, 
{"config_name": "reg_cat_topo_2_1", "data_files": "reg_cat/topo_2_1.csv"}, {"config_name": "reg_cat_visualizing_soil", "data_files": "reg_cat/visualizing_soil.csv"}, {"config_name": "reg_num_Ailerons", "data_files": "reg_num/Ailerons.csv"}, {"config_name": "reg_num_Bike_Sharing_Demand", "data_files": "reg_num/Bike_Sharing_Demand.csv"}, {"config_name": "reg_num_Brazilian_houses", "data_files": "reg_num/Brazilian_houses.csv"}, {"config_name": "reg_num_MiamiHousing2016", "data_files": "reg_num/MiamiHousing2016.csv"}, {"config_name": "reg_num_abalone", "data_files": "reg_num/abalone.csv"}, {"config_name": "reg_num_cpu_act", "data_files": "reg_num/cpu_act.csv"}, {"config_name": "reg_num_delays_zurich_transport", "data_files": "reg_num/delays_zurich_transport.csv"}, {"config_name": "reg_num_diamonds", "data_files": "reg_num/diamonds.csv"}, {"config_name": "reg_num_elevators", "data_files": "reg_num/elevators.csv"}, {"config_name": "reg_num_house_16H", "data_files": "reg_num/house_16H.csv"}, {"config_name": "reg_num_house_sales", "data_files": "reg_num/house_sales.csv"}, {"config_name": "reg_num_houses", "data_files": "reg_num/houses.csv"}, {"config_name": "reg_num_medical_charges", "data_files": "reg_num/medical_charges.csv"}, {"config_name": "reg_num_nyc-taxi-green-dec-2016", "data_files": "reg_num/nyc-taxi-green-dec-2016.csv"}, {"config_name": "reg_num_pol", "data_files": "reg_num/pol.csv"}, {"config_name": "reg_num_sulfur", "data_files": "reg_num/sulfur.csv"}, {"config_name": "reg_num_superconduct", "data_files": "reg_num/superconduct.csv"}, {"config_name": "reg_num_wine_quality", "data_files": "reg_num/wine_quality.csv"}, {"config_name": "reg_num_yprop_4_1", "data_files": "reg_num/yprop_4_1.csv"}]} | 2023-09-04T15:37:39+00:00 | [] | [] | TAGS
#task_categories-tabular-classification #task_categories-tabular-regression #region-us
| Tabular Benchmark
=================
Dataset Description
-------------------
This dataset is a curation of various datasets from openML and is curated to benchmark performance of various machine learning algorithms.
* Repository: URL
* Paper: URL
### Dataset Summary
Benchmark made of curation of various tabular data learning tasks, including:
* Regression from Numerical and Categorical Features
* Regression from Numerical Features
* Classification from Numerical and Categorical Features
* Classification from Numerical Features
### Supported Tasks and Leaderboards
* 'tabular-regression'
* 'tabular-classification'
Dataset Structure
-----------------
### Data Splits
This dataset consists of four splits (folders) based on tasks and datasets included in tasks.
* reg\_num: Task identifier for regression on numerical features.
* reg\_cat: Task identifier for regression on numerical and categorical features.
* clf\_num: Task identifier for classification on numerical features.
* clf\_cat: Task identifier for classification on categorical features.
Depending on the dataset you want to load, you can load the dataset by passing 'task\_name/dataset\_name' to 'data\_files' argument of 'load\_dataset' like below:
Dataset Creation
----------------
### Curation Rationale
This dataset is curated to benchmark performance of tree based models against neural networks. The process of picking the datasets for curation is mentioned in the paper as below:
* Heterogeneous columns. Columns should correspond to features of different nature. This excludes
images or signal datasets where each column corresponds to the same signal on different sensors.
* Not high dimensional. We only keep datasets with a d/n ratio below 1/10.
* Undocumented datasets We remove datasets where too little information is available. We did keep
datasets with hidden column names if it was clear that the features were heterogeneous.
* I.I.D. data. We remove stream-like datasets or time series.
* Real-world data. We remove artificial datasets but keep some simulated datasets. The difference is
subtle, but we try to keep simulated datasets if learning these datasets is of practical importance
(like the Higgs dataset), and not just a toy example to test specific model capabilities.
* Not too small. We remove datasets with too few features (< 4) and too few samples (< 3 000). For
benchmarks on numerical features only, we remove categorical features before checking if enough
features and samples are remaining.
* Not too easy. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)
reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)
is below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],
but this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regression [Fernández-Delgado et al., 2014],
a close score for the simple and powerful models suggests that we are already close to the best achievable score.
* Not deterministic. We remove datasets where the target is a deterministic function of the data. This
mostly means removing datasets on games like poker and chess. Indeed, we believe that these
datasets are very different from most real-world tabular datasets, and should be studied separately
### Source Data
Numerical Classification
Categorical Classification
Numerical Regression
Categorical Regression
### Dataset Curators
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.
### Licensing Information
Léo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep
learning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New
Orleans, United States. hal-03723551v2
| [
"### Dataset Summary\n\n\nBenchmark made of curation of various tabular data learning tasks, including:\n\n\n* Regression from Numerical and Categorical Features\n* Regression from Numerical Features\n* Classification from Numerical and Categorical Features\n* Classification from Numerical Features",
"### Supported Tasks and Leaderboards\n\n\n* 'tabular-regression'\n* 'tabular-classification'\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\nThis dataset consists of four splits (folders) based on tasks and datasets included in tasks.\n\n\n* reg\\_num: Task identifier for regression on numerical features.\n* reg\\_cat: Task identifier for regression on numerical and categorical features.\n* clf\\_num: Task identifier for classification on numerical features.\n* clf\\_cat: Task identifier for classification on categorical features.\n\n\nDepending on the dataset you want to load, you can load the dataset by passing 'task\\_name/dataset\\_name' to 'data\\_files' argument of 'load\\_dataset' like below:\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThis dataset is curated to benchmark performance of tree based models against neural networks. The process of picking the datasets for curation is mentioned in the paper as below:\n\n\n* Heterogeneous columns. Columns should correspond to features of different nature. This excludes\nimages or signal datasets where each column corresponds to the same signal on different sensors.\n* Not high dimensional. We only keep datasets with a d/n ratio below 1/10.\n* Undocumented datasets We remove datasets where too little information is available. We did keep\ndatasets with hidden column names if it was clear that the features were heterogeneous.\n* I.I.D. data. We remove stream-like datasets or time series.\n* Real-world data. We remove artificial datasets but keep some simulated datasets. The difference is\nsubtle, but we try to keep simulated datasets if learning these datasets are of practical importance\n(like the Higgs dataset), and not just a toy example to test specific model capabilities.\n* Not too small. We remove datasets with too few features (< 4) and too few samples (< 3 000). For\nbenchmarks on numerical features only, we remove categorical features before checking if enough\nfeatures and samples are remaining.\n* Not too easy. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)\nreaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)\nis below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],\nbut this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regresison [Fernández-Delgado et al., 2014],\na close score for the simple and powerful models suggests that we are already close to the best achievable score.\n* Not deterministic. We remove datasets where the target is a deterministic function of the data. This\nmostly means removing datasets on games like poker and chess. Indeed, we believe that these\ndatasets are very different from most real-world tabular datasets, and should be studied separately",
"### Source Data\n\n\nNumerical Classification\n\n\n\nCategorical Classification\n\n\n\nNumerical Regression\n\n\n\nCategorical Regression",
"### Dataset Curators\n\n\nLéo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.",
"### Licensing Information\n\n\nLéo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep\nlearning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New\nOrleans, United States. ffhal-03723551v2f"
] | [
"TAGS\n#task_categories-tabular-classification #task_categories-tabular-regression #region-us \n",
"### Dataset Summary\n\n\nBenchmark made of curation of various tabular data learning tasks, including:\n\n\n* Regression from Numerical and Categorical Features\n* Regression from Numerical Features\n* Classification from Numerical and Categorical Features\n* Classification from Numerical Features",
"### Supported Tasks and Leaderboards\n\n\n* 'tabular-regression'\n* 'tabular-classification'\n\n\nDataset Structure\n-----------------",
"### Data Splits\n\n\nThis dataset consists of four splits (folders) based on tasks and datasets included in tasks.\n\n\n* reg\\_num: Task identifier for regression on numerical features.\n* reg\\_cat: Task identifier for regression on numerical and categorical features.\n* clf\\_num: Task identifier for classification on numerical features.\n* clf\\_cat: Task identifier for classification on categorical features.\n\n\nDepending on the dataset you want to load, you can load the dataset by passing 'task\\_name/dataset\\_name' to 'data\\_files' argument of 'load\\_dataset' like below:\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThis dataset is curated to benchmark performance of tree based models against neural networks. The process of picking the datasets for curation is mentioned in the paper as below:\n\n\n* Heterogeneous columns. Columns should correspond to features of different nature. This excludes\nimages or signal datasets where each column corresponds to the same signal on different sensors.\n* Not high dimensional. We only keep datasets with a d/n ratio below 1/10.\n* Undocumented datasets We remove datasets where too little information is available. We did keep\ndatasets with hidden column names if it was clear that the features were heterogeneous.\n* I.I.D. data. We remove stream-like datasets or time series.\n* Real-world data. We remove artificial datasets but keep some simulated datasets. The difference is\nsubtle, but we try to keep simulated datasets if learning these datasets are of practical importance\n(like the Higgs dataset), and not just a toy example to test specific model capabilities.\n* Not too small. We remove datasets with too few features (< 4) and too few samples (< 3 000). For\nbenchmarks on numerical features only, we remove categorical features before checking if enough\nfeatures and samples are remaining.\n* Not too easy. We remove datasets which are too easy. Specifically, we remove a dataset if a simple model (max of a single tree and a regression, logistic or OLS)\nreaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit learn)\nis below 5%. Other benchmarks use different metrics to remove too easy datasets, like removing datasets perfectly separated by a single decision classifier [Bischl et al., 2021],\nbut this ignores varying Bayes rate across datasets. As tree ensembles are superior to simple trees and logistic regresison [Fernández-Delgado et al., 2014],\na close score for the simple and powerful models suggests that we are already close to the best achievable score.\n* Not deterministic. We remove datasets where the target is a deterministic function of the data. This\nmostly means removing datasets on games like poker and chess. Indeed, we believe that these\ndatasets are very different from most real-world tabular datasets, and should be studied separately",
"### Source Data\n\n\nNumerical Classification\n\n\n\nCategorical Classification\n\n\n\nNumerical Regression\n\n\n\nCategorical Regression",
"### Dataset Curators\n\n\nLéo Grinsztajn, Edouard Oyallon, Gaël Varoquaux.",
"### Licensing Information\n\n\nLéo Grinsztajn, Edouard Oyallon, Gaël Varoquaux. Why do tree-based models still outperform deep\nlearning on typical tabular data?. NeurIPS 2022 Datasets and Benchmarks Track, Nov 2022, New\nOrleans, United States. ffhal-03723551v2f"
] |
187435967cbdfa88395fd379e9f403c8b6ac46f3 | # AutoTrain Dataset for project: lojban-translation
## Dataset Description
This dataset has been automatically processed by AutoTrain for project lojban-translation.
### Languages
The BCP-47 code for the dataset's language is en2jb.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "I read the poem for my child.",
"target": "mi tcidu lo pemci te cu'u le panzi be mi"
},
{
"source": "Jim is learning how to drive a car.",
"target": "la jim cilre fi lo nu klasazri lo karce"
}
]
```
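Given the simple `source`/`target` schema above, a minimal loading sketch with the Hugging Face `datasets` library follows. The repo id and the `train`/`valid` split names are taken from this card; that the AutoTrain export loads with the default builder is an assumption.

```python
# Minimal sketch, assuming the AutoTrain export loads with the default builder.
from datasets import load_dataset

ds = load_dataset("woctordho/autotrain-data-lojban-translation")

example = ds["train"][0]
print(example["source"])  # English sentence
print(example["target"])  # Lojban translation
```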
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 | | woctordho/autotrain-data-lojban-translation | [
"task_categories:translation",
"language:en",
"language:jbo",
"license:mit",
"region:us"
] | 2022-10-27T12:05:43+00:00 | {"language": ["en", "jbo"], "license": "mit", "task_categories": ["translation"]} | 2023-11-17T11:18:19+00:00 | [] | [
"en",
"jbo"
] | TAGS
#task_categories-translation #language-English #language-Lojban #license-mit #region-us
| AutoTrain Dataset for project: lojban-translation
=================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project lojban-translation.
### Languages
The BCP-47 code for the dataset's language is en2jb.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en2jb.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-translation #language-English #language-Lojban #license-mit #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en2jb.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
a95e3d32256c9b0b1048b517554c9cf29adf3f2a | # AutoTrain Dataset for project: company-description-generator
## Dataset Description
This dataset has been automatically processed by AutoTrain for project company-description-generator.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Id": "0014U00002aSdZIQA0",
"text": "High Heat Rejection Window Film. Blocks 99% UV Rays and Rejects Up To 99% of Infrared Heat Radiation. 360\u00b0 Protection From Bacteria, Germs, & Viruses On Surfaces For Up To 90 days. EPA Registered. Safe For Food Industries, Hospitals, Schools, and More. CONCRETE & SURFACE COATINGS. Protective Coatings That Repel Water, Oils, Dirt, and More. Keeps Surfaces Protected and Easier To Clean. Keep Metal Surfaces Intact With A Strong Nanocoating Protectant The Mitigates The Growth Of Corrosion In Extreme Environments. At Snapguard Solutions we specialize in industrial nanocoatings to meet your needs. Through nanotechnology we are able to prolong and enhance the life of everyday commercial and residential items. Whether its blocking out the sun or repelling water, we have the right solution for you. Untreated surfaces absorb water and other liquids. This damages and deteriorates the integrity over time. Solution Applied on Surface. Our solutions fill and cover any imperfections on a surface, creating an invisible layer of protection designed to increase the longevity of the material. The treated surface is breathable and repels waters and other liquids. It can resists other elements such as snow, salt and mechanical oils. Utilize the same nanotechnology to protect what matters to you the most. Protect existing settings from the elements they encounter on the daily. Nanocoatings that can be applied to protect industrial settings and machinery. Multiple coatings available for all defense teams. SnapGuard Solutions, LLC is the leading innovator of advanced nano-technology solutions for the residential, industrial, commercial, and defense industry. Our solutions are ideal for protecting various porous and nonporous surfaces from water damage, stains, UV Light, corrosion, and dirt. Our product line includes: Glass Protectant, Fabric Protectant, One-Time Sealer, Solar Protectant, and Nano-Ceramic Tint. Fog build up can make it dangerous to see. Our nanotechnology based anti-fog films are the solution you need to prevent fog. It's application can be easily done and applied to any glass, mirror, or plastic in just a matter of minutes. AUTOMOTIVE 100% effective and durable. Our anti-fog films can be used in your automobile so you can be safe out. VISOR/GOGGLES Easily apply an anti-fog liner to any goggle or visor shield. See in high definition clarity. INDUSTRIAL Our films will not interfere with any radio, GPS, or cellular connections. Stay connected and protected from the sun. DEFENSE Keep clear visibility at all times and in any weather. Our Anti-Fog protective films are military grade certified. We are here to provide the correct solutions for you. Send us a brief message explaining what services you may require. One of our representatives will get back to you shortly. Thank you. LIFETIME WARRANTY FOR NANO CERAMIC WINDOW TINT. To Activate Your Limited Lifetime Warranty For Nano Ceramic window tint please to fill out the form. What is Covered and How Long Coverage Lasts. Snapguard Solutions warrants professionally sold and installed Snapguard Solutions Nano Ceramic Window Tint against the defects in manufacture or materials set forth below and for the time period set forth below. This warranty is valid only if the Products application was performed by a. Installer in the United States in accordance with manufacturer\u2019s application procedures and applicable law. 
This limited lifetime warranty coverage is offered only to the owner of the tint film at the time of the Product\u2019s installation, and is not transferable. Authorized dealers are also covered. To extend the life and looks of your. Nano Ceramic Window Tint Film and to maintain your warranty coverage, certain care and maintenance should be followed. Do not roll down Tinted windows for 6 days and until the Tint has properly adhered to the glass. Do not wash the film for 30 days after installation. Do not use abrasive cleaners or coarse cloths. Use a mild soap and a clean, soft cloth or synthetic sponge. THE EXPRESS WARRANTIES CONTAINED IN THIS AGREEMENT ARE IN LIEU OF ALL OTHER WARRANTIES, EXPRESS OR IMPLIED. SNAPGUARD SOLUTIONS HEREBY DISCLAIMS ALL OTHER EXPRESS AND IMPLIED WARRANTIES, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL SNAPGUARD SOLUTIONS OR ANY INSTALLER BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES OF ANY KIND ARISING OUT OF OR RELATED TO (1) THE USE OF OR INABILITY TO USE THE PRODUCT, (2) THE BREACH OF ANY WARRANTY OR OF THIS AGREEMENT, (3) ANY ACT OR FAILURE TO ACT RELATING TO THIS AGREEMENT, OR OTHERWISE, INCLUDING WITHOUT LIMITATION DAMAGES FOR LOSS OF USE, LOST PROFITS, INTERRUPTION OF BUSINESS, OR ANY OTHER MONETARY OR OTHER LOSS, REGARDLESS OF THE FORM OF ACTION WHETHER IN CONTRACT, TORT (INCLUDING NEGLIGENCE) STRICT PRODUCT LIABILITY, OR OTHERWISE, EVEN IF SNAPGUARD SOLUTIONS HAS BEEN ADVISED OF OR IS OTHERWISE AWARE OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU. How State/Provincial Law Applies. This warranty gives you specific legal rights, and you may also have other rights that vary from jurisdiction to jurisdiction. EXCLUSIONS AND MISCELLANEOUS TERMS AND CONDITIONS (1) This warranty does not cover or apply to losses, costs, damages or defects arising from or caused by improper Product application, improper Product care, cleaning or abuse, misuse (including use not complying with applicable law? non-automotive applications, natural causes, accident, ordinary wear, damage caused by road debris, the physical impact of rocks, abrasion or scratching or any other acts, occurrence or defects, faults or damages not related to defects in materials or manufacture of the Product. Except as otherwise provided by applicable law, illegal application or use of the Product will render all warranties, whether express or implied, null and void and of no effect and. Snapguard Solutions shall have no liability therefor. (2) The. Snapguard Solutions dealer/installer is an independent contractor of. Snapguard Solutions is not responsible for improper installation or representations made by the dealer/installer. No contractor, including the. Snapguard Solutions dealer/installer, has any authority or power to modify or extend this limited warranty. The benefits under this warranty shall be the sole and exclusive remedy against. Snapguard Solutions for any loss arising out of the sale, application, and/or use of the Product. (3) If any provision of this warranty is unenforceable or ineffective, the rest of this warranty shall remain in effect and be construed as if such provision had not been contained in this warranty. (4) This warranty shall be governed by California law, excluding its laws relating to choice of law. 
Regardless of what venue would otherwise be permissive or required,. Snapguard Solutions and the customer stipulate that all actions arising under or related to this warranty shall be brought in the federal or state courts located in the City of Los Angeles, California,. Snapguard Solutions and the customer agree that such forum is mutually convenient and bears a reasonable relationship to this Agreement, and waive objection to any venue laid therein. HOW TO RECEIVE WARRANTY SERVICE. If you believe your. Nano Ceramic window tint is defective, please contact 1-323-797-7130 to see eligibility. Send along with the UPC code from the original packaging and a legible copy of your original receipt that includes the retailer name and address, date of purchase, and mail postage paid, to: Snapguard Solutions. Attn: Warranty Service Dept. 2150 Chenault Drive Carrollton, TX 75006. Snapguard Solutions product is covered by this limited warranty,. Snapguard Solutions will mail you replacement film. If your. Snapguard Solutions product is not covered by this limited warranty,. Snapguard Solutions will notify you of its decision in writing. Manufacturers\u2019 warranties may not apply in all cases, depending on factors such as use of the product, where the product was purchased, or who you purchased the product from. Please review the warranty carefully, and contact. Snapguard Solutions if you have any questions. Showing 34 of 34 products. Anti-Fog Film - 12 in x 18 in. Fabric Concentrate Water & Stain Repellent - 250ml. Fabric Protectant Water & Stain Repellent Spray - 200ml. Metal Protectant - 250ml. Nano Ceramic Window Tint - 2' ft x 100' ft. Nano Ceramic Window Tint - 2' ft x 25' ft. Nano Ceramic Window Tint - 2' ft x 50' ft. Nano Ceramic Window Tint - 2' ft x 6.5' ft. Nano Ceramic Window Tint - 2.5' ft x 12' ft. Nano Ceramic Window Tint - 2.5' ft x 50' ft. Nano Ceramic Window Tint - 2.5' ft x 6.5' ft. THIS ITEM EXCLUDED FROM ALL SALES. ",
"feat_Website": "https://snapguardsolutions.com",
"feat_scraping_date": "2022-10-12 19:05:50.082577+00:00",
"feat_Crunchbase_URL__c": "https://www.crunchbase.com/organization/snapguard",
"feat_Description": "SnapGuard Solutions, LLC is the leading innovator of advanced nano-technology solutions for the residential, industrial, commercial, and defense industry. Our solutions are ideal for protecting various porous and nonporous surfaces from water damage, stains, UV Light, corrosion, and dirt. Our product line includes: Glass Protectant, Fabric Protectant, One-Time Sealer, Solar Protectant, and Nano-Ceramic Tint.",
"feat_Name": "Snapguard",
"target": "Snapguard Solutions is a sealer for all natural stone and concrete material repels water and oil."
},
{
"feat_Id": "0012E00002gb2TiQAI",
"text": "A Game-Changing Mental Health and Wellbeing Solution for Employers, Employees and Insurers to help improve your employees' health and wellbeing at work. 24/7 access to unlimited mental health and wellbeing supports, including a personal Mental Health Coach and open-ended therapy, at the touch of a button. Burnout can cost employers as much as 8.3% of an employee\u2019s annual salary. While we\u2019ve been focused on dealing with the challenges of COVID-19, another crisis has been plaguing workplaces \u2013 burnout. The personal and financial costs of burnout are so great that no employer can afford to ignore it. Give your employees the support they need, when they need it. A complete range of supports to help your employees stay on top of their mental health at all times. Access to unlimited Mental Health Coaching to provide support and set goals wherever and whenever employees need it. Open-ended Mental Health Therapy Sessions with Psychotherapists and Counsellors. Concierge into Mental Health Insurance Benefits and Community Supports. Mental Health Digital Tools. Access to 100s of Digital Tools, Personalised Paths, Exercises and Tips for Mental Fitness delivered via video and podcasts. 24/7 Mental Health Support in Seconds by Phone, WhatsApp or Live Chat. We look after over 1,500 clients and support 1,000,000 employees, students and insurance members. How we Make an Impact. Our market-leading mental health supports can make a real, tangible difference to your employees and your business. increase in mentally healthy employees. decrease in reporting as severely distressed. We take a holistic approach to mental health and provide whatever supports are needed across body, mind and life with a comprehensive range of additional wellbeing services. Mental Health Training & Events. A whole range of seminars, workshops and 1-1 consultations offered digitally and onsite, delivered by experienced professionals. Digital Gym & Wellbeing Series. A digital gym, topical wellbeing series and bespoke events delivered by experts and guest presenters from our digital wellbeing studio. Strategic mental health programmes designed in consultation with an organisation from policy setting through to training and promotion. Discover how Total Mental Health can support your employees. Our Digital Studio and Digital Clinic solutions offer convenient access to a range of qualified & vetted clinical, fitness and wellbeing experts. A high quality, engaging experience to support employees at home or on the move. A year round series of weekly, wellbeing seminars focused on topical themes. Delivered from our 4G digital studio by our health and wellbeing presenter who is joined by a variety of expert guests. A weekly schedule of live and on demand fitness classes, delivered by experts who will demonstrate a safe and maintainable way to tackle fitness at home. Compliant with best practices. Cost effective, long term support. Digital Health & Wellbeing Solutions. Get access to fantastic weekly live streamed webinars from our 4K Wellbeing Studio. Each week contains a new topic delivered by an engaging host featuring a range of experts on that topic. With over 20 class types and 80 Live Streams per month, our Digital Gym has been extremely popular with employees who enjoy the variety of classes, expert delivery and convenience of being able to attend a class live when scheduled or access the same class at a time that suits them. 
Give your employees access to a range of Health Experts right in their Health and Wellbeing Platform. Book sessions with Physios, Nutritionists, Parenting Coaches, Remote Working Experts and Ergonomic Specialists. Peace of mind knowing that your employees\u2019 wellbeing needs are supported if they continue to work remotely. Strengthen workplace wellbeing and improve the overall atmosphere and culture where you work. Enable vital 1-1 opportunities to access a variety of wellbeing experts from home or on the move. Show employees that they are valued, and attract top talent with innovative wellbeing calendar of events and fitness. Access to truly engaging conversations about a range of topical wellbeing themes. Opportunities to put health & wellbeing questions to experts across a wide range of topics. Access to expert teams to consult with you wherever you are for the best advice to get you on the right track. A daily fitness schedule to participate in from the comfort of your home. Book a variety of digital and onsite workplace wellness events for your organisation, from Mental Health to Beauty. Access 100s of insured, qualified & vetted workplace wellbeing experts. Health risks will be significantly reduced, resulting in lower absenteeism and presenteeism rates. An improved cultural atmosphere develops as a result of a sense of togetherness, and often fun. An increased feeling of being valued among employees, which results in high levels of loyalty and retention. Improved employer brand. Having regular onsite wellness events is another reason for people to want to join your company. Employees will be equipped with the knowledge needed to focus on improving particular aspects of their wellbeing. Onsite wellbeing events give employees the chance to engage with one another in a different setting. Improved health and an increased sense of personal wellbeing, both physically and mentally. A heightened sense of value and belonging \u2014 it's important that employees feel as though their company cares about their wellbeing. Book your workplace wellbeing onsite events with access to 1000s of qualified wellness experts. Promote your onsite wellbeing event among employees, easily through the platform. Track event attendance and engagement to gain a better understanding of what interests employees most. Ask about onsite wellbeing. Spectrum.Life is the largest provider of employer health and wellness services in Ireland, and we're now available across the UK too! We look after the health and wellness needs of 100s of clients and over 500,000 users. Spectrum Life is the only Workplace Wellbeing provider that gives you digital and onsite wellbeing, all through one connected solution. We\u2019re combining Onsite Wellness, Digital Wellbeing, Employee Assistance Programmes and Health Screening managed on one platform and that\u2019s never been done before. With years of experience in managing workplace wellness for many different organisations, we noticed that having to go to various vendors for different elements of wellbeing was a pain point for a lot of people. We developed a platform that enables those tasked with managing wellbeing in the workplace to book and manage all aspects of it in one place. We pride ourselves on advising our clients on the latest approaches, technology, and wellness initiatives to ensure the best return on investment. 
Over the years, we have invested heavily in our tech team and also in our wellness team so that we can deliver a range of modern and innovative services that will evaluate, engage and energise your employees and their families to make behavioural changes and most importantly to stick to them. Spectrum.Life makes workplace wellbeing more manageable and accessible than ever for companies of all sizes. It\u2019s customisable, it\u2019s easy to use\u2026 it is Where Wellbeing Works. Learn how we have helped our clients achieve success in workplace wellbeing. Increasing engagement in workplace wellbeing. Wellbeing in a dynamic workplace. The New Benchmark in Employee Mental Health. A complete mental health and wellbeing programme for employers, employees and insurers. We provide employees with unlimited 24/7 access to unlimited mental health and wellbeing supports, including a personal Mental Health Coach and open-ended therapy, at the touch of a button. What is Total Mental Health. Employees can select a Mental Health Coach for regular live or via text one-to-one coaching on areas from \u2018improving sleep\u2019 to \u2018managing anxiety\u2019. Employees can Access open-ended Therapy via Counselling or Psychotherapy from a network of 1,000+ Counsellors within 48 hours of a referral. 24/7 On Demand Support. Employees can contact our Mental Health Team for support and on Demand in Seconds via Phone, Chat, WhatsApp or SMS. Increase in mentally healthy employees after using Spectrum.Life Mental Health Services. Return on Investment versus employees not receiving mental health support. Decrease in employees reporting as severely distressed after using Spectrum.Life Mental Health Services. Increase in productivity reported by Employees after using Spectrum.Life Mental Health Services. The Total Mental Health Experience. Mental Health Coaching offers employees preventative care and makes mental health support more accessible to everyone. Open Ended Mental Health Therapy. Open-Ended and Unlimited Therapy based on need, not quotas. Access to our network of 1,000+ Counsellors within 48 hours of a referral. Reassurance that your Employees, Leaders and Managers can speak to a Qualified Counsellor anytime, 24/7, 365. Advanced Mental Health Concierge. Care and Support into Inpatient Facilities, and Referral to a Mental Heath Specialist & mental health occupational assessments. Less than 1 in 4 people are getting the Mental Health support they need. Waiting Lists for Mental Health services are routinely over 6 months and Mental Health issues are on the increase. Peace of Mind \u2018I know all my employees will be safe, even if they don\u2019t talk to us about their problems\u2019. Employee are 76% more likely to join an organisation which has a clear commitment to mental health. There is often a stigma about asking for help. Research shows that 70%+ of people would choose a Coach over a Therapist, Employee Assistance Program or GP. Ease of discovery and access \u2018I know where to turn if I have a problem\u2019, \u2018I can always find an answer with Spectrum.Life'. Therapy is not always the solution \u2013 Coaching is a preventative measure for employees who need help with a breakthrough goal or who are struggling but don\u2019t need Therapy. Accessible at home and in the workplace, making it the perfect tool for employees and managers in a hybrid working world. Get first-hand data on the effects of workplace wellbeing and learn how this can be applied to your organisation. 
A Report on Mental Health in the Workplace \u2013 The Value of Having a Mental Health Programme in Your Organisation. The EAP Report- The effectiveness of EAP on workplace mental health. Mental Health in the Workplace. Digital and Onsite workplace mental health events are a great way to disassemble any stigma that may be present among employees. They also enable employees to understand their own mental wellbeing. Book seminars, training workshops and consultation clinics delivered by qualified mental health professionals. Our mental health seminars for the workplace are delivered by accredited mental health professionals. They are an effective way for employees to learn how best to manage and improve their mental wellbeing. These mental health training workshops empower specific groups of employees to build a stronger awareness about mental health in the workplace. Arrange mental health training for employees at your organisation to improve their ability to support colleagues in distress and to help them improve their own lifestyle habits. Why Book a Mental Health Event. Create a mental health-positive work environment. Help employees be proactive with their mental health. Give employees the tools to mind their mental health. Make your organisation a happy place to work. Gain tangible insights and learn how to best enhance employee wellbeing with our company guides. A Guide to Organising Workplace Mental Health Workshops. Employee Guide to Better Mental Health. HR Manager\u2019s Guide to Employee Financial Wellbeing. Sleep is a core component of our health and wellbeing and its impact on an organisation should not be overlooked. Poor sleep health can have a negative impact on businesses at an operational level. From absenteeism caused by related mental and physical illnesses to decreased levels of productivity, there is no denying that problems with sleep among employees affects the workplace. In this guide, we will highlight what sleep health is, how it impacts the workplace and how organisations can strive to improve it. What Is Sleep Health. Sleep is an essential part of our health and wellbeing. In fact, it is just as essential as nutrition and exercise. Unfortunately, many of us simply aren\u2019t getting enough sleep to maintain optimum cognitive function. Approximately 1 in 3 people are surviving on 6 hours or less. Most of us accept this as normal, however, consistently sleeping for less than the recommended hours can affect our wellbeing in several different ways. Sleep health also refers to the quality of sleep we get, whether it\u2019s restful enough, if it was interrupted and what our bedtime routine is like. A healthy sleep pattern means. You get an appropriate amount of sleep. You sleep throughout the night. You fall asleep within 20 minutes of going to bed. You feel energised when you awake. There are many factors that can influence our quality of sleep. We have an internal body clock that regulates our energy levels and tells us when our body is ready to sleep, but this can be impacted by our nutritional intake, our stress levels, our physical activity and external factors like screen time, noise pollution and so on. Sleep Health Impact in the Workplace. Billions of Euro are lost in companies worldwide as a result of insomnia and other sleep difficulties. It\u2019s been noted in recent studies that employers are becoming increasingly aware of the impact poor sleep health has among workers in their organisations. 
Lack of sleep or poor sleep quality negatively impacts employee performance. Millions of productive days are lost in organisations due to the impact poor sleep health has on productivity levels, and it\u2019s a direct influence on absenteeism. Poor sleep health also indirectly effects the workplace, as chronic sleep issues can cause mental and physical health difficulties that result in absences and decreased levels of productivity and engagement. In a culture where being \u201cbusy\u201d and overworked is worn as a badge of honour, sleep has become somewhat devalued in western society with disregard for how exactly it can impact our performance at work, and in other aspects of our lives. Lack of sleep impacts efficiency, productivity and more mistakes, according to a Harvard report. There has also been ample research that indicates that REM sleep is beneficial to the creative process, helping us to think outside the box. This stage of sleep is also essential for aiding problem solving. With this in mind, it\u2019s clear to see that under-sleeping employees are not performing to the best of their abilities, which ultimately results in under performance at a business level. Perhaps most concerning is the impact poor sleep health has on workplace safety. The same Harvard report says that between 50,000 and 100,000 deaths occur per year as a result of workers of all professions not getting enough sleep. It is also noted that more than a million workplace injuries occur due to sleep deprivation. The study noted that some of the deadliest accidents in recent times, including the explosion of the space shuttle Challenger, were caused by sleep deprivation in workers. Company Sleep Health In Numbers. Organisations & Sleep Health - How to Offer Help. Sleep is as crucial to performance and productivity as it is to physical as well as mental health. However, as a non\u2013 work activity that is heavily influenced by physical, mental and emotional wellbeing, organisations must find innovative ways to improve the sleep health of their employees. Sleep Health & Wellbeing. Including sleep health as part of a workplace wellbeing programme is one such way. As a practical solution for organisations to help employees understand and manage their own sleep health, a wellbeing programme can help with. Personal or work-related problems. Social, emotional and physical stress. Maintaining a work-life balance. A workplace wellbeing programme can also provide a dynamic platform and marketplace to share best\u2013practice expertise on the subject of sleep health. Through seminars, webinars and articles written by experts, employees can access knowledge and information in a variety of different digital and onsite formats to suit their particular working practices. In addition to the latest approaches, technology and wellness initiatives, employees can also seek advice from sleep health experts who can offer evidence \u2013based sleep training, workshops and private consultations. These qualified and experienced professionals can also help HR managers within an organisation to. Identify any company policies or behaviours that may be seen as a threat to sleep health. Implement a stand\u2013alone sleep management programme. Address sleep as part of an overall health and wellbeing strategy. In recognition of the impact that aspects of physical and mental wellbeing have on our sleep health, organisations can also use a workplace wellbeing programme to help employees understand sleep within a wider context. 
Sleep is important for our physical, social, intellectual and emotional wellbeing. So too is its co\u2013dependent relationship with nutrition and fitness. That\u2019s why it\u2019s important that employees have access to a workplace wellbeing programme that offers a whole wellness approach. 32,000,000 People In the UK have anti-social working patterns 25-30% higher risk of injury than working day shifts. Organisations & Sleep Health - More Ways to Help. A company culture that supports the work\u2013life balance can help employees make small but incremental changes to improve sleep. Organisations can support employees in adopting workplace habits such as. Taking regular breaks from screens. Learning to handle stress. In the age of connectivity, organisations can also help their employees protect their downtime, by allowing them to switch off from any social and productive requirements placed on them. Organisations can support employees adopting leisure habits such as. Cutting back on alcohol and caffeine. Switching off the mobile phone and that \u2018always on\u2019 blue light. Going to bed earlier and at the same time every night. The complexities of sleep can\u2019t be understood overnight. However, with a long\u2013term commitment to a workplace wellbeing programme, organisations can take clear and practical steps towards improving the sleep health of their employees. Having a workplace wellbeing programme that is rich in content and highly accessible will not only give employees the education and support they need to actively take responsibility for their own sleep health, but the motivation to make the behavioural changes necessary to reap the long\u2013term rewards of improved sleep. Percentage of workers who say their job allows them to get enough sleep. SHIFT WORK \u2013 63%. NON SHIFT WORK \u2013 89%. Do you want to put sleep health on the agenda of your workplace wellbeing programme? Talk to a wellness advisor about how we can help. ",
"feat_Website": "https://www.spectrum.life",
"feat_scraping_date": "2022-09-08 01:05:34.389293+00:00",
"feat_Crunchbase_URL__c": "https://www.crunchbase.com/organization/spectrum-life",
"feat_Description": "Spectrum.Life's comprehensive solution enables organisations to provide a workplace wellbeing programme that can have a substantially positive impact on their health and wellness, as well as on the culture and performance of the company.",
"feat_Name": "Spectrum Life",
"target": "Spectrum Life is a B2B mental health & wellness platform providing a clinically-backed product suite of tools and training."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_Website": "Value(dtype='string', id=None)",
"feat_scraping_date": "Value(dtype='string', id=None)",
"feat_Crunchbase_URL__c": "Value(dtype='string', id=None)",
"feat_Description": "Value(dtype='string', id=None)",
"feat_Name": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
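Since `text` holds the scraped website copy and `target` the short company description, the dataset maps naturally onto a summarization-style task. A hedged sketch of preparing input/output pairs is below; the repo id comes from this page, and the column handling assumes the schema shown above.

```python
# Sketch only: build (input, summary) pairs from this card's schema.
from datasets import load_dataset

ds = load_dataset("mindthebridge/autotrain-data-company-description-generator")

def to_pair(example):
    # `text` is the long scraped website copy; `target` is the curated one-liner.
    return {"input_text": example["text"], "summary": example["target"]}

pairs = ds["train"].map(to_pair, remove_columns=ds["train"].column_names)
print(pairs[0]["summary"])
```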
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2397 |
| valid | 600 |
| mindthebridge/autotrain-data-company-description-generator | [
"language:en",
"region:us"
] | 2022-10-27T12:47:29+00:00 | {"language": ["en"], "task_categories": ["conditional-text-generation"]} | 2022-10-27T12:49:03+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| AutoTrain Dataset for project: company-description-generator
============================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project company-description-generator.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
cb7f336db3519b9ce33ca2dcd11cf0e306f56dea | # Dataset Card for Product Reviews
Customer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.
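The schema (see the metadata below) includes `stars`, `language`, and `review_body`, which makes per-language or per-rating slices easy to build. An illustrative sketch, assuming the language codes follow `amazon_reviews_multi` (`en`, `de`, `fr`):

```python
# Illustrative sketch; repo id taken from this page, language codes assumed.
from datasets import load_dataset

reviews = load_dataset("mgb-dx-meetup/product-reviews")

one_star_en = reviews["train"].filter(
    lambda r: r["language"] == "en" and r["stars"] == 1
)
print(len(one_star_en), one_star_en[0]["review_body"])
```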
## Licensing information
This dataset is based on the [`amazon_reviews_multi`](https://huggingface.co/datasets/amazon_reviews_multi) dataset. | mgb-dx-meetup/product-reviews | [
"region:us"
] | 2022-10-27T14:11:15+00:00 | {"dataset_info": {"features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 454952.85, "num_examples": 1500}, {"name": "train", "num_bytes": 6073361.466666667, "num_examples": 20000}], "download_size": 4034850, "dataset_size": 6528314.316666666}} | 2022-10-27T14:25:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for Product Reviews
Customer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.
## Licensing information
This dataset is based on the 'amazon_reviews_multi' dataset. | [
"# Dataset Card for Product Reviews\n\nCustomer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.",
"## Licensing information\n\nThis datasets is based on the 'amazon_reviews_multi' dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Product Reviews\n\nCustomer reviews of Amazon products, categorised by the number of stars assigned to each product. The dataset consists of several thousand reviews in English, German, and French.",
"## Licensing information\n\nThis datasets is based on the 'amazon_reviews_multi' dataset."
] |
f4f954f99f54f4a8261f1ab7b28469550c4bceeb |
# Ao Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by ao_style"```
If it is too strong, just add [] around it.
Trained for 10000 steps

I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ec8MaO4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/N4IRulK.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/22alJny.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ZPPIs9L.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/XQZvjGs.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/ao_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T14:28:24+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:16:29+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Ao Artist Embedding / Textual Inversion
=======================================
Usage
-----
To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained for 10000 steps

I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the "-7500" from the file name and replace the 10k-step version in your folder
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
7f557c5d4da73b73ea90c3e0ab9663484f25b992 |
# Mikeou Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by mikeou_art"```
If it is too strong, just add [] around it.
Trained for 10000 steps

I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Anc83EO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NukXbXO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/LcamHiI.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sHL81zL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/vrfu8WV.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/mikeou_art | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T14:29:59+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:18:34+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Mikeou Artist Embedding / Textual Inversion
===========================================
Usage
-----
To use this embedding you have to download the file, as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained for 10000 steps

I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the "-7500" from the file name and replace the 10k-step version in your folder
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
9961aeb4e5e069a1760792883bbb4df34eb03fad | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: ARTeLab/ilpost
* Config: ARTeLab--ilpost
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__ilpost-ARTeLab__ilpost-d2ea00-1904764775 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:40:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/ilpost"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-ilpost", "metrics": ["bertscore"], "dataset_name": "ARTeLab/ilpost", "dataset_config": "ARTeLab--ilpost", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:44:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: ARTeLab/ilpost
* Config: ARTeLab--ilpost
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @morenolq for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-ilpost\n* Dataset: ARTeLab/ilpost\n* Config: ARTeLab--ilpost\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-ilpost\n* Dataset: ARTeLab/ilpost\n* Config: ARTeLab--ilpost\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] |
8ab5d278ab48d4d9943fca87fbaf33774faf65e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: ARTeLab/fanpage
* Config: ARTeLab--fanpage
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__fanpage-ARTeLab__fanpage-6c7fce-1904864776 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:40:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/fanpage"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": ["bertscore"], "dataset_name": "ARTeLab/fanpage", "dataset_config": "ARTeLab--fanpage", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:47:53+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: ARTeLab/fanpage
* Config: ARTeLab--fanpage
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @morenolq for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: ARTeLab/fanpage\n* Config: ARTeLab--fanpage\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: ARTeLab/fanpage\n* Config: ARTeLab--fanpage\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] |
4da865e1b2019c88a45f920e7c8896be5c86033d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-mlsum
* Dataset: ARTeLab/mlsum-it
* Config: ARTeLab--mlsum-it
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-ARTeLab__mlsum-it-ARTeLab__mlsum-it-b0baa7-1904964782 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-27T14:52:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ARTeLab/mlsum-it"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-mlsum", "metrics": ["bertscore"], "dataset_name": "ARTeLab/mlsum-it", "dataset_config": "ARTeLab--mlsum-it", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-10-27T14:55:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-mlsum
* Dataset: ARTeLab/mlsum-it
* Config: ARTeLab--mlsum-it
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @morenolq for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-mlsum\n* Dataset: ARTeLab/mlsum-it\n* Config: ARTeLab--mlsum-it\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-mlsum\n* Dataset: ARTeLab/mlsum-it\n* Config: ARTeLab--mlsum-it\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @morenolq for evaluating this model."
] |
8e4d20db185e50b3a66dcaa7f87468a48efedd55 | # Dataset Card for "hotel-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
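The repo metadata below defines three fields (`review_date`, `hotel_name`, `review`) and a single `train` split, which is enough for a minimal loading sketch:

```python
# Minimal sketch based on the schema in this repo's metadata.
from datasets import load_dataset

hotel = load_dataset("ashraq/hotel-reviews", split="train")
print(hotel[0]["hotel_name"], hotel[0]["review"])
```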
Data was obtained from [here](https://www.kaggle.com/datasets/jiashenliu/515k-hotel-reviews-data-in-europe) | ashraq/hotel-reviews | [
"region:us"
] | 2022-10-27T16:22:07+00:00 | {"dataset_info": {"features": [{"name": "review_date", "dtype": "string"}, {"name": "hotel_name", "dtype": "string"}, {"name": "review", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15043294, "num_examples": 93757}], "download_size": 6100544, "dataset_size": 15043294}} | 2022-10-27T16:24:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "hotel-reviews"
More Information needed
Data was obtained from here | [
"# Dataset Card for \"hotel-reviews\"\n\nMore Information needed\n\nData was obtained from here"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"hotel-reviews\"\n\nMore Information needed\n\nData was obtained from here"
] |
1ed13e8ef280bd45e3bbac4cfa8bbd9d64ec9f89 | # Dataset Card for Naruto BLIP captions
_Dataset used to train [TBD](TBD)._
The original images were obtained from [narutopedia.com](https://naruto.fandom.com/wiki/Narutopedia) and captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
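A short access sketch follows; the repo id matches this card's citation, and the split and field names are the ones stated above.

```python
# Usage sketch: each row pairs a varying-size PIL JPEG with its BLIP caption.
from datasets import load_dataset

ds = load_dataset("lambdalabs/naruto-blip-captions", split="train")

row = ds[0]
row["image"].save("sample.jpg")  # `image` decodes to a PIL.Image
print(row["text"])               # the accompanying caption
```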
## Example stable diffusion outputs

> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
## Citation
If you use this dataset, please cite it as:
```
@misc{cervenka2022naruto2,
author = {Cervenka, Eole},
title = {Naruto BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/lambdalabs/naruto-blip-captions/}}
}
``` | lambdalabs/naruto-blip-captions | [
"region:us"
] | 2022-10-27T17:02:46+00:00 | {} | 2022-10-27T20:17:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for Naruto BLIP captions
_Dataset used to train TBD._
The original images were obtained from URL and captioned with the pre-trained BLIP model.
For each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.
## Example stable diffusion outputs
!URL
> "Bill Gates with a hoodie", "John Oliver with Naruto style", "Hello Kitty with Naruto style", "Lebron James with a hat", "Mickael Jackson as a ninja", "Banksy Street art of ninja"
If you use this dataset, please cite it as:
| [
"# Dataset Card for Naruto BLIP captions\n\n_Dataset used to train TBD._\n\nThe original images were obtained from URL and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Example stable diffusion outputs\n\n!URL\n> \"Bill Gates with a hoodie\", \"John Oliver with Naruto style\", \"Hello Kitty with Naruto style\", \"Lebron James with a hat\", \"Mickael Jackson as a ninja\", \"Banksy Street art of ninja\"\n\nIf you use this dataset, please cite it as:"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Naruto BLIP captions\n\n_Dataset used to train TBD._\n\nThe original images were obtained from URL and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.",
"## Example stable diffusion outputs\n\n!URL\n> \"Bill Gates with a hoodie\", \"John Oliver with Naruto style\", \"Hello Kitty with Naruto style\", \"Lebron James with a hat\", \"Mickael Jackson as a ninja\", \"Banksy Street art of ninja\"\n\nIf you use this dataset, please cite it as:"
] |
29d8c48af080c04fc9e645d72cae49b38866026c | # Dataset Card for "reqs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hasanriaz121/reqs | [
"region:us"
] | 2022-10-27T17:05:57+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "requirement_txt", "dtype": "string"}, {"name": "EF", "dtype": "int64"}, {"name": "PE", "dtype": "int64"}, {"name": "PO", "dtype": "int64"}, {"name": "RE", "dtype": "int64"}, {"name": "SE", "dtype": "int64"}, {"name": "US", "dtype": "int64"}, {"name": "X", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 53980, "num_examples": 285}, {"name": "train", "num_bytes": 431941, "num_examples": 2308}, {"name": "validation", "num_bytes": 49251, "num_examples": 257}], "download_size": 218916, "dataset_size": 535172}} | 2022-10-27T17:06:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "reqs"
More Information needed | [
"# Dataset Card for \"reqs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"reqs\"\n\nMore Information needed"
] |
4788cd2a26eae8a1e6534d87b1bfbad82c3a9dc2 |
# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/amazon-science/mintaka
- **Repository:** https://github.com/amazon-science/mintaka
- **Paper:** https://aclanthology.org/2022.coling-1.138/
- **Point of Contact:** [GitHub](https://github.com/amazon-science/mintaka)
### Dataset Summary
Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: https://aclanthology.org/2022.coling-1.138/
To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:
- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)
- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)
- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)
- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)
- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)
- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)
- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)
- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)
- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)
- We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History
Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for question answering.
To ensure comparability, please refer to our evaluation script here: https://github.com/amazon-science/mintaka#evaluation
### Languages
All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"id": "a9011ddf",
"lang": "en",
"question": "What is the seventh tallest mountain in North America?",
"answerText": "Mount Lucania",
"category": "geography",
"complexityType": "ordinal",
"questionEntity":
[
{
"name": "Q49",
"entityType": "entity",
"label": "North America",
"mention": "North America",
"span": [40, 53]
},
{
"name": 7,
"entityType": "ordinal",
"mention": "seventh",
"span": [12, 19]
}
],
"answerEntity":
[
{
"name": "Q1153188",
"label": "Mount Lucania",
}
],
}
```
### Data Fields
The data fields are the same among all splits.
`id`: a unique ID for the given sample.
`lang`: the language of the question.
`question`: the original question elicited in the corresponding language.
`answerText`: the original answer text elicited in English.
`category`: the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports
`complexityType`: the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic
`questionEntity`: a list of annotated question entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"entityType": The type of the entity. Options are:
entity, cardinal, ordinal, date, time, percent, quantity, or money
"label": The label of the Wikidata Q-code
"mention": The entity as it appears in the English question text. Will be empty for non-English samples.
"span": The start and end characters of the mention in the English question text. Will be empty for non-English samples.
}
```
`answerEntity`: a list of annotated answer entities identified by crowd workers.
```
{
"name": The Wikidata Q-code or numerical value of the entity
"label": The label of the Wikidata Q-code
}
```
### Data Splits
For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
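As a quick sketch of accessing the splits and fields (the repository id is taken from this card; whether the loader expects a language config is an assumption to verify):

```python
from datasets import load_dataset

# A language config such as "en" may be required depending on the loader
ds = load_dataset("AmazonScience/mintaka")

print({split: len(ds[split]) for split in ds})  # train / dev / test sizes

sample = ds["train"][0]
print(sample["question"], "->", sample["answerText"])
```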
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
Amazon Alexa AI.
### Licensing Information
This project is licensed under the CC-BY-4.0 License.
### Citation Information
Please cite the following papers when using this dataset.
```latex
@inproceedings{sen-etal-2022-mintaka,
title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
author = "Sen, Priyanka and
Aji, Alham Fikri and
Saffari, Amir",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.138",
pages = "1604--1619"
}
```
### Contributions
Thanks to [@afaji](https://github.com/afaji) for adding this dataset. | AmazonScience/mintaka | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:ar",
"multilinguality:de",
"multilinguality:ja",
"multilinguality:hi",
"multilinguality:pt",
"multilinguality:en",
"multilinguality:es",
"multilinguality:it",
"multilinguality:fr",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | 2022-10-27T17:38:30+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["ar", "de", "ja", "hi", "pt", "en", "es", "it", "fr"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "mintaka", "pretty_name": "Mintaka", "language_bcp47": ["ar-SA", "de-DE", "ja-JP", "hi-HI", "pt-PT", "en-EN", "es-ES", "it-IT", "fr-FR"]} | 2022-10-28T09:55:50+00:00 | [] | [] | TAGS
#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-ar #multilinguality-de #multilinguality-ja #multilinguality-hi #multilinguality-pt #multilinguality-en #multilinguality-es #multilinguality-it #multilinguality-fr #size_categories-100K<n<1M #source_datasets-original #license-cc-by-4.0 #region-us
|
# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Point of Contact: GitHub
### Dataset Summary
Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: URL
To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:
- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)
- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)
- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)
- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)
- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)
- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)
- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)
- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)
- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)
- We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History
Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for question answering.
To ensure comparability, please refer to our evaluation script here: URL
### Languages
All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
'id': a unique ID for the given sample.
'lang': the language of the question.
'question': the original question elicited in the corresponding language.
'answerText': the original answer text elicited in English.
'category': the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports
'complexityType': the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic
'questionEntity': a list of annotated question entities identified by crowd workers.
'answerEntity': a list of annotated answer entities identified by crowd workers.
### Data Splits
For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Amazon Alexa AI.
### Licensing Information
This project is licensed under the CC-BY-4.0 License.
Please cite the following papers when using this dataset.
### Contributions
Thanks to @afaji for adding this dataset. | [
"# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: GitHub",
"### Dataset Summary\n\nMintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: URL\n\nTo build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:\n\n- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)\n- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)\n- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)\n- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)\n- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)\n- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)\n- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)\n- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)\n- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)\n- We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History\n\nMintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for question answering.\nTo ensure comparability, please refer to our evaluation script here: URL",
"### Languages\n\nAll questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.",
"## Dataset Structure",
"### Data Instances\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\nThe data fields are the same among all splits.\n\n'id': a unique ID for the given sample.\n\n'lang': the language of the question. \n\n'question': the original question elicited in the corresponding language.\n\n'answerText': the original answer text elicited in English.\n\n'category': the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports\n\n'complexityType': the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno comparative, multihop, difference, or generic\n\n'questionEntity': a list of annotated question entities identified by crowd workers.\n\n'answerEntity': a list of annotated answer entities identified by crowd workers.",
"### Data Splits\n\nFor each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.",
"### Personal and Sensitive Information\n\nThe corpora is free of personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nAmazon Alexa AI.",
"### Licensing Information\n\nThis project is licensed under the CC-BY-4.0 License.\n\n\n\nPlease cite the following papers when using this dataset.",
"### Contributions\n\nThanks to @afaji for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-ar #multilinguality-de #multilinguality-ja #multilinguality-hi #multilinguality-pt #multilinguality-en #multilinguality-es #multilinguality-it #multilinguality-fr #size_categories-100K<n<1M #source_datasets-original #license-cc-by-4.0 #region-us \n",
"# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: GitHub",
"### Dataset Summary\n\nMintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: URL\n\nTo build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:\n\n- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)\n- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)\n- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)\n- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)\n- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)\n- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)\n- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)\n- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes.)\n- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)\n- We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History\n\nMintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for question answering.\nTo ensure comparability, please refer to our evaluation script here: URL",
"### Languages\n\nAll questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.",
"## Dataset Structure",
"### Data Instances\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\nThe data fields are the same among all splits.\n\n'id': a unique ID for the given sample.\n\n'lang': the language of the question. \n\n'question': the original question elicited in the corresponding language.\n\n'answerText': the original answer text elicited in English.\n\n'category': the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports\n\n'complexityType': the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno comparative, multihop, difference, or generic\n\n'questionEntity': a list of annotated question entities identified by crowd workers.\n\n'answerEntity': a list of annotated answer entities identified by crowd workers.",
"### Data Splits\n\nFor each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.",
"### Personal and Sensitive Information\n\nThe corpora is free of personal or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nAmazon Alexa AI.",
"### Licensing Information\n\nThis project is licensed under the CC-BY-4.0 License.\n\n\n\nPlease cite the following papers when using this dataset.",
"### Contributions\n\nThanks to @afaji for adding this dataset."
] |
61b99919bdf522fee905ba7f3e3e8b67e58e80e5 | # Dataset Card for "early_printed_books_font_detection_loaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | biglam/early_printed_books_font_detection_loaded | [
"region:us"
] | 2022-10-27T19:07:55+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "greek", "1": "antiqua", "2": "other_font", "3": "not_a_font", "4": "italic", "5": "rotunda", "6": "textura", "7": "fraktur", "8": "schwabacher", "9": "hebrew", "10": "bastarda", "11": "gotico_antiqua"}}}}], "splits": [{"name": "test", "num_bytes": 11398084794.636, "num_examples": 10757}, {"name": "train", "num_bytes": 21512059165.866, "num_examples": 24866}], "download_size": 44713803337, "dataset_size": 32910143960.502}} | 2022-10-28T07:47:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "early_printed_books_font_detection_loaded"
More Information needed | [
"# Dataset Card for \"early_printed_books_font_detection_loaded\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"early_printed_books_font_detection_loaded\"\n\nMore Information needed"
] |
d46098f2cd8b030fe0d6c9e5fe32e0e47aaad681 | <h4> Disclosure </h4>
<p> While it's not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by skeleton slime </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by skeleton slime-6500</em></li>
<li>10,000 steps <em>Usage: art by skeleton slime-10000</em> </li>
<li>15,000 steps <em>Usage: art by skeleton slime</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/ATm5o4H.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/DpdwiyC.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/qwGmnel.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://i.imgur.com/SF3kfd4.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/SF3kfd4.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> | zZWipeoutZz/skeleton_slime | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-27T20:21:30+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-28T08:48:03+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Disclosure
While it's not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know.
#### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*art by skeleton slime*
add **[ ]** around it to reduce its weight.
#### Included Files
* 6500 steps *Usage: art by skeleton slime-6500*
* 10,000 steps *Usage: art by skeleton slime-10000*
* 15,000 steps *Usage: art by skeleton slime*
cheers
Wipeout
#### Example Pictures
#### prompt comparison
[<img height="100%" width="100%" src="https://i.URL
<h4> Licence
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="URL read the full license here</a>](https://i.URL target=) | [
"#### Disclosure\n\n\n While its not perfect i hope that you are able to create some nice pieces with it, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by skeleton slime* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by skeleton slime- 6500*\n* 10,000 steps *Usage: art by skeleton slime-10000*\n* 15,000 steps *Usage: art by skeleton slime*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n[<img height=\"100%\" width=\"100%\" src=\"https://i.URL\n<h4> Licence \nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>](https://i.URL target=)"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Disclosure\n\n\n While its not perfect i hope that you are able to create some nice pieces with it, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by skeleton slime* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by skeleton slime- 6500*\n* 10,000 steps *Usage: art by skeleton slime-10000*\n* 15,000 steps *Usage: art by skeleton slime*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n[<img height=\"100%\" width=\"100%\" src=\"https://i.URL\n<h4> Licence \nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>](https://i.URL target=)"
] |
ff3d266876d88b216558abbb04575e2efe7a252b | # Dataset Card for "tydiqa_secondary_task"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Mostafa3zazi/tydiqa_secondary_task | [
"region:us"
] | 2022-10-27T21:52:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 52948607, "num_examples": 49881}, {"name": "validation", "num_bytes": 5006461, "num_examples": 5077}], "download_size": 29688806, "dataset_size": 57955068}} | 2022-10-27T21:52:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tydiqa_secondary_task"
More Information needed | [
"# Dataset Card for \"tydiqa_secondary_task\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tydiqa_secondary_task\"\n\nMore Information needed"
] |
f364ba93d5e59758672fdf2ff59b4a505ab3caba | # Dataset Card for "eurosat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vicm0r/eurosat | [
"region:us"
] | 2022-10-27T23:17:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AnnualCrop", "1": "Forest", "2": "HerbaceousVegetation", "3": "Highway", "4": "Industrial", "5": "Pasture", "6": "PermanentCrop", "7": "Residential", "8": "River", "9": "SeaLake"}}}}], "splits": [{"name": "train", "num_bytes": 57259856.0, "num_examples": 27000}], "download_size": 88186968, "dataset_size": 57259856.0}} | 2022-10-27T23:17:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "eurosat"
More Information needed | [
"# Dataset Card for \"eurosat\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"eurosat\"\n\nMore Information needed"
] |
1914ab53af43442e03b97a42d1fc6ba76e04bf53 | # Dataset Card for "Human_obj_bg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TeddyCat/Human_obj_bg | [
"region:us"
] | 2022-10-28T02:25:32+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 350102.0, "num_examples": 20}], "download_size": 337556, "dataset_size": 350102.0}} | 2022-12-18T05:02:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Human_obj_bg"
More Information needed | [
"# Dataset Card for \"Human_obj_bg\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Human_obj_bg\"\n\nMore Information needed"
] |
443f28582af7d75148a31c76a300efa4b5b0108a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164906 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:21:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
7f7e1e829257c402b1de674dcae98afac66756de | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164909 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T05:25:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
77fee1ab3232c91e763d3505780ec8e6b633e065 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164903 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:08:28+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
ef0156d81134002a97402df78322bb674e400708 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164908 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:06:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T04:06:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
f130023e49e8c83786974b72fc1852c574028a83 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-6c03d1-1913164902 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T03:07:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T03:08:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
2acaa832b1e781b8a91915bdbc119828f71b5556 |
# Dataset Card for SyNLI
A synthetic NLI dataset built from open-domain sentences, using T5 as the data synthesizer. The data can be used to train sentence embedding models.
## Data Fields
The data have several fields:
- `sent0`: premise as a string
- `sent1`: entailment hypothesis as a string
- `hard_neg`: contradiction hypothesis as a string
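A minimal consumption sketch (the repository id comes from this card; streaming is only a suggestion given the dataset's size):

```python
from itertools import islice
from datasets import load_dataset

# Streaming avoids downloading all ~61M triplets up front
ds = load_dataset("mattymchen/synli", split="train", streaming=True)

for row in islice(ds, 3):
    # Each row is an (anchor, positive, hard negative) triplet for contrastive training
    print(row["sent0"], "|", row["sent1"], "|", row["hard_neg"])
```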
| mattymchen/synli | [
"license:odc-by",
"region:us"
] | 2022-10-28T04:23:23+00:00 | {"license": "odc-by", "dataset_info": {"features": [{"name": "sent0", "dtype": "string"}, {"name": "sent1", "dtype": "string"}, {"name": "hard_neg", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11441750654, "num_examples": 60939492}], "download_size": 6904073153, "dataset_size": 11441750654}} | 2022-10-28T07:52:16+00:00 | [] | [] | TAGS
#license-odc-by #region-us
|
# Dataset Card for SyNLI
A synthetic NLI dataset built from open-domain sentences, using T5 as the data synthesizer. The data can be used to train sentence embedding models.
## Data Fields
The data have several fields:
- 'sent0': premise as a string
- 'sent1': entailment hypothesis as a string
- 'hard_neg': contradiction hypothesis as a string
| [
"# Dataset Card for SyNLI\nA synthetic NLI datasets from open domain sentences using T5 as data synthesizer. The data can be used to train sentence embedding models.",
"## Data Fields\nThe data have several fields:\n- 'sent0': premise as a string\n- 'sent1': entailment hypothesis as a string\n- 'hard_neg': contradiction hypothesis as a string"
] | [
"TAGS\n#license-odc-by #region-us \n",
"# Dataset Card for SyNLI\nA synthetic NLI datasets from open domain sentences using T5 as data synthesizer. The data can be used to train sentence embedding models.",
"## Data Fields\nThe data have several fields:\n- 'sent0': premise as a string\n- 'sent1': entailment hypothesis as a string\n- 'hard_neg': contradiction hypothesis as a string"
] |
a4d47050c1f1a90dc09c8920cd66ebc1e1523ca0 | # Dataset Card for "Romance-cleaned-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-cleaned-2 | [
"region:us"
] | 2022-10-28T06:20:14+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3407789.8839248433, "num_examples": 6466}, {"name": "validation", "num_bytes": 378936.11607515655, "num_examples": 719}], "download_size": 2403265, "dataset_size": 3786726.0}} | 2022-10-28T06:20:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Romance-cleaned-2"
More Information needed | [
"# Dataset Card for \"Romance-cleaned-2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Romance-cleaned-2\"\n\nMore Information needed"
] |
b600bc01160467f3102f821deadf0e130637f94e | # Dataset Card for "latent_lsun_church_256px"
This is derived from https://huggingface.co/datasets/tglcourse/lsun_church_train
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
```
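For completeness, here is a hypothetical sketch of the reverse direction — encoding an image into a latent in this dataset's format, reusing `vae` and the PIL `image` from the block above (the resize/crop preprocessing is an assumption, not the exact pipeline used to build the dataset):

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),  # assumed crop; the upstream cropping may differ
    transforms.ToTensor(),
])

x = preprocess(image).unsqueeze(0) * 2 - 1  # (1, 3, 256, 256) scaled to (-1, 1)
with torch.no_grad():
    latent = vae.encode(x).latent_dist.sample() * 0.18215  # scale as in Stable Diffusion
```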
| tglcourse/latent_lsun_church_256px | [
"region:us"
] | 2022-10-28T06:45:35+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "a", "11": "b", "12": "c", "13": "d", "14": "e", "15": "f"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "test", "num_bytes": 106824288, "num_examples": 6312}, {"name": "train", "num_bytes": 2029441460, "num_examples": 119915}], "download_size": 2082210019, "dataset_size": 2136265748}} | 2022-10-28T06:57:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "latent_lsun_church_256px"
This is derived from URL
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
| [
"# Dataset Card for \"latent_lsun_church_256px\"\n\nThis is derived from URL\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent_lsun_church_256px\"\n\nThis is derived from URL\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] |
30044e415f19965e2435434396f050322bca523f | # Dataset Card for "uniprot_sprot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/uniprot_sprot | [
"region:us"
] | 2022-10-28T08:09:42+00:00 | {"dataset_info": {"features": [{"name": "uniprot_id", "dtype": "string"}, {"name": "sequences", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 21314102.893347207, "num_examples": 56801}, {"name": "train", "num_bytes": 191823924.1066528, "num_examples": 511201}], "download_size": 211969427, "dataset_size": 213138027.0}} | 2022-10-30T12:44:58+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "uniprot_sprot"
More Information needed | [
"# Dataset Card for \"uniprot_sprot\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"uniprot_sprot\"\n\nMore Information needed"
] |
e40d5764be1040bac56f49cea5df9d243e8d904b | # Dataset Card for "latent_afhqv2_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` | tglcourse/latent_afhqv2_256px | [
"region:us"
] | 2022-10-28T08:19:16+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "wild"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 267449972, "num_examples": 15803}], "download_size": 260672854, "dataset_size": 267449972}} | 2022-10-28T10:51:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "latent_afhqv2_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
| [
"# Dataset Card for \"latent_afhqv2_256px\"\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent_afhqv2_256px\"\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] |
39c63d396a8b291a2387b8499c84e7a3c4f3f451 |
# Dataset Card for naacl2022
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a named entity recognition dataset annotated for the science entity recognition task, a [project](https://github.com/neubig/nlp-from-scratch-assignment-2022) from the CMU 11-711 course.
### Supported Tasks and Leaderboards
NER task.
### Languages
English
## Dataset Structure
### Data Instances
A sample of the dataset
{'id': '0',
'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],
'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}
### Data Fields
id,tokens,ner_tags
- `id`: a `string` feature giving the sample index.
- `tokens`: a `list` of `string` features giving the token sequence.
- `ner_tags`: a `list` of classification labels for each token in the sentence, with possible values including
`O` (0), `B-MethodName` (1), `I-MethodName` (2), `B-HyperparameterName` (3),`I-HyperparameterName` (4),`B-HyperparameterValue` (5),`I-HyperparameterValue` (6),`B-MetricName` (7),`I-MetricName` (8),`B-MetricValue` (9),`I-MetricValue` (10),`B-TaskName` (11),`I-TaskName` (12),`B-DatasetName` (13),`I-DatasetName` (14).
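For quick inspection, here is a minimal loading sketch (the repository path is assumed from this card; it prints each token with its stored tag, whether the tags are kept as strings or as class ids):

```python
from datasets import load_dataset

ds = load_dataset("havens2/naacl2022")
example = ds["train"][0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag}")  # e.g. T5LARGE  B-MethodName
```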
### Data Splits
The data is split into three files:
train.txt
dev.txt
test.txt
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The data was annotated using Label Studio; the papers were collected from the TACL and ACL 2022 conferences.
#### Who are the annotators?
Xiaoyue Cui and Haotian Teng annotated the datasets.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@xcui297](https://github.com/xcui297); [@haotianteng](https://github.com/haotianteng) for adding this dataset.
| havens2/naacl2022 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"acl",
"sciBERT",
"sci",
"11711",
"region:us"
] | 2022-10-28T08:38:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "sci_NER_naacl", "tags": ["acl", "sciBERT", "sci", "acl", "11711"]} | 2022-10-28T10:37:16+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #acl #sciBERT #sci #11711 #region-us
|
# Dataset Card for [naacl2022]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This is a named entity recognition dataset annotated for the science entity recognition task, a project from the CMU 11-711 course.
### Supported Tasks and Leaderboards
NER task.
### Languages
English
## Dataset Structure
### Data Instances
A sample of the dataset
{'id': '0',
'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],
'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}
### Data Fields
id,tokens,ner_tags
- 'id': a 'string' feature giving the sample index.
- 'tokens': a 'list' of 'string' features giving the token sequence.
- 'ner_tags': a 'list' of classification labels for each token in the sentence, with possible values including
'O' (0), 'B-MethodName' (1), 'I-MethodName' (2), 'B-HyperparameterName' (3),'I-HyperparameterName' (4),'B-HyperparameterValue' (5),'I-HyperparameterValue' (6),'B-MetricName' (7),'I-MetricName' (8),'B-MetricValue' (9),'I-MetricValue' (10),'B-TaskName' (11),'I-TaskName' (12),'B-DatasetName' (13),'I-DatasetName' (14).
### Data Splits
The data is split into three files:
URL
URL
URL
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
The data was annotated using Label Studio; the papers were collected from the TACL and ACL 2022 conferences.
#### Who are the annotators?
Xiaoyue Cui and Haotian Teng annotated the datasets.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @xcui297; @haotianteng for adding this dataset.
| [
"# Dataset Card for [naacl2022]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis is a named entity recognition dataset annotated for the science entity recognition task, a project from the CMU 11-711 course.",
"### Supported Tasks and Leaderboards\n\nNER task.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA sample of the dataset\n{'id': '0',\n'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],\n'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}",
"### Data Fields\n\nid,tokens,ner_tags\n\n- 'id': a 'string' feature give the sample index.\n- 'tokens': a 'list' of 'string' features give the sequence.\n- 'ner_tags': a 'list' of classification labels for each token in the sentence, with possible values including \n'O' (0), 'B-MethodName' (1), 'I-MethodName' (2), 'B-HyperparameterName' (3),'I-HyperparameterName' (4),'B-HyperparameterValue' (5),'I-HyperparameterValue' (6),'B-MetricName' (7),'I-MetricName' (8),'B-MetricValue' (9),'I-MetricValue' (10),'B-TaskName' (11),'I-TaskName' (12),'B-DatasetName' (13),'I-DatasetName' (14).",
"### Data Splits\n\nData split into\nURL\nURL\nURL",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe data is annotated by using labelstudio, the papers are collected from TACL and ACL 2022 conferences.",
"#### Who are the annotators?\n\nXiaoyue Cui and Haotian Teng annotated the datasets.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @xcui297; @haotianteng for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #acl #sciBERT #sci #11711 #region-us \n",
"# Dataset Card for [naacl2022]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis is a named entity recognition dataset annotated for the science entity recognition task, a project from the CMU 11-711 course.",
"### Supported Tasks and Leaderboards\n\nNER task.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA sample of the dataset\n{'id': '0',\n'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],\n'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}",
"### Data Fields\n\nid,tokens,ner_tags\n\n- 'id': a 'string' feature give the sample index.\n- 'tokens': a 'list' of 'string' features give the sequence.\n- 'ner_tags': a 'list' of classification labels for each token in the sentence, with possible values including \n'O' (0), 'B-MethodName' (1), 'I-MethodName' (2), 'B-HyperparameterName' (3),'I-HyperparameterName' (4),'B-HyperparameterValue' (5),'I-HyperparameterValue' (6),'B-MetricName' (7),'I-MetricName' (8),'B-MetricValue' (9),'I-MetricValue' (10),'B-TaskName' (11),'I-TaskName' (12),'B-DatasetName' (13),'I-DatasetName' (14).",
"### Data Splits\n\nData split into\nURL\nURL\nURL",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe data is annotated by using labelstudio, the papers are collected from TACL and ACL 2022 conferences.",
"#### Who are the annotators?\n\nXiaoyue Cui and Haotian Teng annotated the datasets.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @xcui297; @haotianteng for adding this dataset."
] |
eac45f711beabc481045075e3066be32ed55dc8e | # Dataset Card for "latent_afhqv2_512px"
Each image is cropped to 512px square and encoded to a 4x64x64 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_512px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 64, 64)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
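
# --- Optional: decoding a small batch (illustrative sketch) ---
# Field names and scaling are taken from this card; the batch size is arbitrary.
batch = torch.tensor(dataset['train'][:4]['latent']) * (1 / 0.18215)  # (4, 4, 64, 64)
with torch.no_grad():
    images = vae.decode(batch).sample                                 # (4, 3, 512, 512) in (-1, 1)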
``` | tglcourse/latent_afhqv2_512px | [
"region:us"
] | 2022-10-28T09:21:26+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog", "2": "wild"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1052290164, "num_examples": 15803}], "download_size": 1038619876, "dataset_size": 1052290164}} | 2022-10-28T10:52:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "latent_afhqv2_512px"
Each image is cropped to 512px square and encoded to a 4x64x64 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
| [
"# Dataset Card for \"latent_afhqv2_512px\"\n\nEach image is cropped to 512px square and encoded to a 4x64x64 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent_afhqv2_512px\"\n\nEach image is cropped to 512px square and encoded to a 4x64x64 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] |
c5c8ed58a7134ad219a2ac61ed44427db1d26d23 |
# UD_Spanish-AnCora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/UniversalDependencies/UD_Spanish-AnCora
- **Point of Contact:** [Daniel Zeman]([email protected])
### Dataset Summary
This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
Three conllu files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
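As a minimal illustration of how those ten fields line up, the sketch below parses a single, made-up word line; real lines come from the conllu files listed under Data Splits:

```python
FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS", "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

# A hypothetical example line in CoNLL-U format (tab-separated, one token per line)
line = "1\tEl\tel\tDET\tda0ms0\tDefinite=Def|Gender=Masc|Number=Sing|PronType=Art\t2\tdet\t_\t_"
token = dict(zip(FIELDS, line.split("\t")))
print(token["FORM"], token["UPOS"], token["HEAD"], token["DEPREL"])  # El DET 2 det
```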
### Data Splits
- es_ancora-ud-train.conllu
- es_ancora-ud-dev.conllu
- es_ancora-ud-test.conllu
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
[UD_Spanish-AnCora](https://github.com/UniversalDependencies/UD_Spanish-AnCora)
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
To learn about the Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
#### Who are the annotators?
For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Citation Information
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
### Contributions
[N/A]
| PlanTL-GOB-ES/UD_Spanish-AnCora | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-10-28T09:30:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "pretty_name": "UD_Spanish-AnCora", "tags": []} | 2022-11-17T12:07:35+00:00 | [] | [
"es"
] | TAGS
#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# UD_Spanish-AnCora
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Website: URL
- Point of Contact: Daniel Zeman
### Dataset Summary
This dataset is composed of the annotations from the AnCora corpus, projected on the Universal Dependencies treebank. We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Spanish ('es-ES')
## Dataset Structure
### Data Instances
Three conllu files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: URL
### Data Splits
- es_ancora-URL
- es_ancora-URL
- es_ancora-URL
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
UD_Spanish-AnCora
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the AnCora project at the University of Barcelona. It was converted to dependencies by the Universal Dependencies team and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the AnCora site.
To learn about the Universal Dependencies, visit the webpage URL
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the AnCora site.
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the AnCora site.
#### Who are the annotators?
For more information on the AnCora annotation team, visit the AnCora site.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
This work is licensed under a <a rel="license" href="URL">CC Attribution 4.0 International License</a>.
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
### Contributions
[N/A]
| [
"# UD_Spanish-AnCora",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL\n- Point of Contact: Daniel Zeman",
"### Dataset Summary\n\nThis dataset is composed of the annotations from the AnCora corpus, projected on the Universal Dependencies treebank. We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.",
"### Supported Tasks and Leaderboards\n\nPOS tagging",
"### Languages\n\nThe dataset is in Spanish ('es-ES')",
"## Dataset Structure",
"### Data Instances\n\nThree conllu files.\n\nAnnotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:\n\n1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).\n2) Blank lines marking sentence boundaries.\n3) Comment lines starting with hash (#).",
"### Data Fields\nWord lines contain the following fields:\n\n1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).\n2) FORM: Word form or punctuation symbol.\n3) LEMMA: Lemma or stem of word form.\n4) UPOS: Universal part-of-speech tag.\n5) XPOS: Language-specific part-of-speech tag; underscore if not available.\n6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.\n7) HEAD: Head of the current word, which is either a value of ID or zero (0).\n8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.\n9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.\n10) MISC: Any other annotation.\n \nFrom: URL",
"### Data Splits\n\n- es_ancora-URL\n- es_ancora-URL\n- es_ancora-URL",
"## Dataset Creation",
"### Curation Rationale\n[N/A]",
"### Source Data\n\nUD_Spanish-AnCora",
"#### Initial Data Collection and Normalization\n\nThe original annotation was done in a constituency framework as a part of the AnCora project at the University of Barcelona. It was converted to dependencies by the Universal Dependencies team and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.\n\nFor more information on the AnCora project, visit the AnCora site.\n\nTo learn about the Universal Dependences, visit the webpage URL",
"#### Who are the source language producers?\n\nFor more information on the AnCora corpus and its sources, visit the AnCora site.",
"### Annotations",
"#### Annotation process\n\nFor more information on the first AnCora annotation, visit the AnCora site.",
"#### Who are the annotators?\n\nFor more information on the AnCora annotation team, visit the AnCora site.",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n[N/A]",
"### Licensing Information\n\nThis work is licensed under a <a rel=\"license\" href=\"URL Attribution 4.0 International License</a>.\n\n\n\nThe following paper must be cited when using this corpus:\n\nTaulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).\n\nTo cite the Universal Dependencies project:\n\nRueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.",
"### Contributions\n\n[N/A]"
] | [
"TAGS\n#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# UD_Spanish-AnCora",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL\n- Point of Contact: Daniel Zeman",
"### Dataset Summary\n\nThis dataset is composed of the annotations from the AnCora corpus, projected on the Universal Dependencies treebank. We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.",
"### Supported Tasks and Leaderboards\n\nPOS tagging",
"### Languages\n\nThe dataset is in Spanish ('es-ES')",
"## Dataset Structure",
"### Data Instances\n\nThree conllu files.\n\nAnnotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:\n\n1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).\n2) Blank lines marking sentence boundaries.\n3) Comment lines starting with hash (#).",
"### Data Fields\nWord lines contain the following fields:\n\n1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).\n2) FORM: Word form or punctuation symbol.\n3) LEMMA: Lemma or stem of word form.\n4) UPOS: Universal part-of-speech tag.\n5) XPOS: Language-specific part-of-speech tag; underscore if not available.\n6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.\n7) HEAD: Head of the current word, which is either a value of ID or zero (0).\n8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.\n9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.\n10) MISC: Any other annotation.\n \nFrom: URL",
"### Data Splits\n\n- es_ancora-URL\n- es_ancora-URL\n- es_ancora-URL",
"## Dataset Creation",
"### Curation Rationale\n[N/A]",
"### Source Data\n\nUD_Spanish-AnCora",
"#### Initial Data Collection and Normalization\n\nThe original annotation was done in a constituency framework as a part of the AnCora project at the University of Barcelona. It was converted to dependencies by the Universal Dependencies team and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.\n\nFor more information on the AnCora project, visit the AnCora site.\n\nTo learn about the Universal Dependences, visit the webpage URL",
"#### Who are the source language producers?\n\nFor more information on the AnCora corpus and its sources, visit the AnCora site.",
"### Annotations",
"#### Annotation process\n\nFor more information on the first AnCora annotation, visit the AnCora site.",
"#### Who are the annotators?\n\nFor more information on the AnCora annotation team, visit the AnCora site.",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n[N/A]",
"### Licensing Information\n\nThis work is licensed under a <a rel=\"license\" href=\"URL Attribution 4.0 International License</a>.\n\n\n\nThe following paper must be cited when using this corpus:\n\nTaulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).\n\nTo cite the Universal Dependencies project:\n\nRueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.",
"### Contributions\n\n[N/A]"
] |
51b89189df9e9a8f048f53c0e354767fd6a500f6 |
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras]([email protected])
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Each line holds one token, with the word form or punctuation symbol in the first column and the corresponding IOB tag in the last one (the sample above also shows a part-of-speech tag in between). Sentences are separated by an empty line.
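A minimal reading sketch for this layout (the file name is taken from the splits below; the encoding is an assumption, so adjust it if your copy differs):

```python
def read_conll(path):
    """Yield (tokens, tags) per sentence from a whitespace-separated CoNLL file."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="latin-1") as f:  # encoding assumed, not stated in the card
        for raw in f:
            line = raw.strip()
            if not line:
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
            else:
                cols = line.split()
                tokens.append(cols[0])   # word form
                tags.append(cols[-1])    # IOB tag is always the last column
    if tokens:
        sentences.append((tokens, tags))
    return sentences

train = read_conll("esp.train")
```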
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
| PlanTL-GOB-ES/CoNLL-NERC-es | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | 2022-10-28T09:42:01+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "pretty_name": "CoNLL-NERC-es", "tags": []} | 2022-11-18T11:55:41+00:00 | [] | [
"es"
] | TAGS
#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #language-Spanish #region-us
|
# CoNLL-NERC-es
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Website: URL
- Point of Contact: Xavier Carreras
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish ('es-ES')
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Each line holds one token, with the word form or punctuation symbol in the first column and the corresponding IOB tag in the last one (the sample above also shows a part-of-speech tag in between). Sentences are separated by an empty line.
### Data Splits
- URL: 273037 lines
- URL: 54837 lines (used as dev)
- URL: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
### Contributions
[N/A]
| [
"# CoNLL-NERC-es",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL\n- Point of Contact: Xavier Carreras",
"### Dataset Summary\n\nCoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.\n\nWe use this corpus as part of the EvalEs Spanish language benchmark.",
"### Supported Tasks and Leaderboards\n\nNamed Entity Recognition and Classification",
"### Languages\n\nThe dataset is in Spanish ('es-ES')",
"## Dataset Structure",
"### Data Instances\n\n<pre>\nEl DA O\nAbogado NC B-PER\nGeneral AQ I-PER\ndel SP I-PER\nEstado NC I-PER\n, Fc O\nDaryl VMI B-PER\nWilliams NC I-PER\n, Fc O\nsubrayó VMI O\nhoy RG O\nla DA O\nnecesidad NC O\nde SP O\ntomar VMN O\nmedidas NC O\npara SP O\nproteger VMN O\nal SP O\nsistema NC O\njudicial AQ O\naustraliano AQ O\nfrente RG O\na SP O\nuna DI O\npágina NC O\nde SP O\ninternet NC O\nque PR O\nimposibilita VMI O\nel DA O\ncumplimiento NC O\nde SP O\nlos DA O\nprincipios NC O\nbásicos AQ O\nde SP O\nla DA O\nLey NC B-MISC\n. Fp O\n</pre>",
"### Data Fields\n\nEvery file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one. The different files are separated by an empty line.",
"### Data Splits\n\n- URL: 273037 lines\n- URL: 54837 lines (used as dev)\n- URL: 53049 lines (used as test)",
"## Dataset Creation",
"### Curation Rationale\n[N/A]",
"### Source Data\n\nThe data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.",
"#### Initial Data Collection and Normalization\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"#### Who are the source language producers?\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"### Annotations",
"#### Annotation process\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"#### Who are the annotators?\n\nThe annotation was carried out by the TALP Research Center2 of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC3 ) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC pro ject (IST-1999-12392).\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset curators",
"### Licensing information\n\n\n\n\nThe following paper must be cited when using this corpus:\n\nErik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).",
"### Contributions\n\n[N/A]"
] | [
"TAGS\n#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #language-Spanish #region-us \n",
"# CoNLL-NERC-es",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL\n- Point of Contact: Xavier Carreras",
"### Dataset Summary\n\nCoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.\n\nWe use this corpus as part of the EvalEs Spanish language benchmark.",
"### Supported Tasks and Leaderboards\n\nNamed Entity Recognition and Classification",
"### Languages\n\nThe dataset is in Spanish ('es-ES')",
"## Dataset Structure",
"### Data Instances\n\n<pre>\nEl DA O\nAbogado NC B-PER\nGeneral AQ I-PER\ndel SP I-PER\nEstado NC I-PER\n, Fc O\nDaryl VMI B-PER\nWilliams NC I-PER\n, Fc O\nsubrayó VMI O\nhoy RG O\nla DA O\nnecesidad NC O\nde SP O\ntomar VMN O\nmedidas NC O\npara SP O\nproteger VMN O\nal SP O\nsistema NC O\njudicial AQ O\naustraliano AQ O\nfrente RG O\na SP O\nuna DI O\npágina NC O\nde SP O\ninternet NC O\nque PR O\nimposibilita VMI O\nel DA O\ncumplimiento NC O\nde SP O\nlos DA O\nprincipios NC O\nbásicos AQ O\nde SP O\nla DA O\nLey NC B-MISC\n. Fp O\n</pre>",
"### Data Fields\n\nEvery file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one. The different files are separated by an empty line.",
"### Data Splits\n\n- URL: 273037 lines\n- URL: 54837 lines (used as dev)\n- URL: 53049 lines (used as test)",
"## Dataset Creation",
"### Curation Rationale\n[N/A]",
"### Source Data\n\nThe data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.",
"#### Initial Data Collection and Normalization\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"#### Who are the source language producers?\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"### Annotations",
"#### Annotation process\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"#### Who are the annotators?\n\nThe annotation was carried out by the TALP Research Center2 of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC3 ) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC pro ject (IST-1999-12392).\n\nFor more information visit the paper from the CoNLL-2002 Shared Task (Tjong Kim Sang, 2002).",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset curators",
"### Licensing information\n\n\n\n\nThe following paper must be cited when using this corpus:\n\nErik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).",
"### Contributions\n\n[N/A]"
] |
8ebccbfbb024e9f07a36c44ca2ddea0165d2c261 | # Dataset Card for "latent_lsun_church_128px"
Each image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_128px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 16, 16)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
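
# --- Optional: batched loading for training (illustrative sketch) ---
# Uses the standard datasets/torch interop; field names are from this card.
from torch.utils.data import DataLoader
ds = dataset['train'].with_format('torch')
loader = DataLoader(ds, batch_size=32, shuffle=True)
batch = next(iter(loader))
batch['latent'].shape  # torch.Size([32, 4, 16, 16]) - ready for a diffusion model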
``` | tglcourse/latent_lsun_church_128px | [
"region:us"
] | 2022-10-28T09:48:21+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "a", "11": "b", "12": "c", "13": "d", "14": "e", "15": "f"}}}}, {"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "test", "num_bytes": 27646560, "num_examples": 6312}, {"name": "train", "num_bytes": 525227700, "num_examples": 119915}], "download_size": 527167710, "dataset_size": 552874260}} | 2022-10-28T10:50:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "latent_lsun_church_128px"
Each image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
| [
"# Dataset Card for \"latent_lsun_church_128px\"\n\nEach image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent_lsun_church_128px\"\n\nEach image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] |
307d2b5f10d43d92df35bc38dd08d6b2551e85f2 | # AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
- [GitHub Repository of the Paper](https://github.com/bonaventuredossou/MLM_AL)
This repository contains the dataset for our paper [`AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages`](https://arxiv.org/pdf/2211.03263.pdf), which will appear at the Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) at EMNLP 2022.
## Our self-active learning framework

## Languages Covered
AfroLM has been pretrained from scratch on 23 African Languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.
## Evaluation Results
AfroLM was evaluated on the MasakhaNER1.0 (10 African languages) and MasakhaNER2.0 (21 African languages) datasets, as well as on text classification and sentiment analysis. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data efficient: it was pretrained on a dataset 14x+ smaller than those of its competitors. Below are the average F1-score performances of the various models across the datasets; please consult our paper for language-level results.
| Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Sentiment Analysis (YOSM) | OOD Sentiment Analysis (Twitter -> YOSM) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| `AfroLM-Large` | **80.13** | **83.26** | **82.90/91.00** | **85.40** | **68.70** |
| `AfriBERTa` | 79.10 | 81.31 | 83.22/90.86 | 82.70 | 65.90 |
| `mBERT` | 71.55 | 80.68 | --- | --- | --- |
| `XLMR-base` | 79.16 | 83.09 | --- | --- | --- |
| `AfroXLMR-base` | `81.90` | `84.55` | --- | --- | --- |
- (*) The evaluation was made on the 11 additional languages of the dataset.
- Bold numbers represent the performance of the model with the **smallest pretrained data**.
## Pretrained Models and Dataset
**Models:** [AfroLM-Large](https://huggingface.co/bonadossou/afrolm_active_learning) and **Dataset:** [AfroLM Dataset](https://huggingface.co/datasets/bonadossou/afrolm_active_learning_dataset)
## HuggingFace usage of AfroLM-large
```python
from transformers import XLMRobertaModel, XLMRobertaTokenizer
model = XLMRobertaModel.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256
```
The `AutoTokenizer` class does not successfully load our tokenizer, so we recommend using the `XLMRobertaTokenizer` class directly. Depending on your task, load the corresponding head of the model. Read the [XLMRoberta Documentation](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)
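For instance, here is a minimal sketch of loading a task-specific head on top of AfroLM for token classification. The head class and `num_labels=9` (matching MasakhaNER's tag set) are illustrative assumptions, not the paper's exact configuration:
```python
from transformers import XLMRobertaForTokenClassification, XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256

# NER head: the pretrained encoder weights are loaded, while the classification
# head is randomly initialised and must be fine-tuned (e.g. on MasakhaNER).
model = XLMRobertaForTokenClassification.from_pretrained(
    "bonadossou/afrolm_active_learning", num_labels=9
)

inputs = tokenizer("AfroLM covers 23 African languages.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (1, sequence_length, num_labels)
```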
## Reproducing our result: Training and Evaluation
- To train the network, run `python active_learning.py`. You can also wrap it in a `bash` script.
- For the evaluation:
- NER Classification: `bash ner_experiments.sh`
- Text Classification & Sentiment Analysis: `bash text_classification_all.sh`
## Citation
```
@inproceedings{dossou-etal-2022-afrolm,
    title = "{A}fro{LM}: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 {A}frican Languages",
    author = "Dossou, Bonaventure F. P. and
      Tonja, Atnafu Lambebo and
      Yousuf, Oreen and
      Osei, Salomey and
      Oppong, Abigail and
      Shode, Iyanuoluwa and
      Awoyomi, Oluwabusayo Olufunke and
      Emezue, Chris",
    booktitle = "Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sustainlp-1.11",
    pages = "52--64",
}
```
## Reach out
Do you have a question? Please create an issue and we will reach out as soon as possible | bonadossou/afrolm_active_learning_dataset | [
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:amh",
"language:orm",
"language:lin",
"language:hau",
"language:ibo",
"language:kin",
"language:lug",
"language:luo",
"language:pcm",
"language:swa",
"language:wol",
"language:yor",
"language:bam",
"language:bbj",
"language:ewe",
"language:fon",
"language:mos",
"language:nya",
"language:sna",
"language:tsn",
"language:twi",
"language:xho",
"language:zul",
"license:cc-by-4.0",
"afrolm",
"active learning",
"language modeling",
"research papers",
"natural language processing",
"self-active learning",
"arxiv:2211.03263",
"region:us"
] | 2022-10-28T10:07:51+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["amh", "orm", "lin", "hau", "ibo", "kin", "lug", "luo", "pcm", "swa", "wol", "yor", "bam", "bbj", "ewe", "fon", "mos", "nya", "sna", "tsn", "twi", "xho", "zul"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "afrolm-dataset", "tags": ["afrolm", "active learning", "language modeling", "research papers", "natural language processing", "self-active learning"]} | 2023-03-29T17:10:21+00:00 | [
"2211.03263"
] | [
"amh",
"orm",
"lin",
"hau",
"ibo",
"kin",
"lug",
"luo",
"pcm",
"swa",
"wol",
"yor",
"bam",
"bbj",
"ewe",
"fon",
"mos",
"nya",
"sna",
"tsn",
"twi",
"xho",
"zul"
] | TAGS
#task_categories-fill-mask #task_ids-masked-language-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Amharic #language-Oromo #language-Lingala #language-Hausa #language-Igbo #language-Kinyarwanda #language-Ganda #language-Luo (Kenya and Tanzania) #language-Nigerian Pidgin #language-Swahili (macrolanguage) #language-Wolof #language-Yoruba #language-Bambara #language-Ghomálá' #language-Ewe #language-Fon #language-Mossi #language-Nyanja #language-Shona #language-Tswana #language-Twi #language-Xhosa #language-Zulu #license-cc-by-4.0 #afrolm #active learning #language modeling #research papers #natural language processing #self-active learning #arxiv-2211.03263 #region-us
| AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
====================================================================================================
* GitHub Repository of the Paper
This repository contains the dataset for our paper 'AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages' which will appear at the third Simple and Efficient Natural Language Processing, at EMNLP 2022.
Our self-active learning framework
----------------------------------
!Model
Languages Covered
-----------------
AfroLM has been pretrained from scratch on 23 African Languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.
Evaluation Results
------------------
AfroLM was evaluated on MasakhaNER1.0 (10 African Languages) and MasakhaNER2.0 (21 African Languages) datasets; on text classification and sentiment analysis. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data efficient because it was pretrained on a dataset 14x+ smaller than its competitors' datasets. Below the average F1-score performances of various models, across various datasets. Please consult our paper for more language-level performance.
* (\*) The evaluation was made on the 11 additional languages of the dataset.
* Bold numbers represent the performance of the model with the smallest pretrained data.
Pretrained Models and Dataset
-----------------------------
Models:: AfroLM-Large and Dataset: AfroLM Dataset
HuggingFace usage of AfroLM-large
---------------------------------
'Autotokenizer' class does not successfully load our tokenizer. So we recommend using directly the 'XLMRobertaTokenizer' class. Depending on your task, you will load the according mode of the model. Read the XLMRoberta Documentation
Reproducing our result: Training and Evaluation
-----------------------------------------------
* To train the network, run 'python active\_learning.py'. You can also wrap it around a 'bash' script.
* For the evaluation:
+ NER Classification: 'bash ner\_experiments.sh'
+ Text Classification & Sentiment Analysis: 'bash text\_classification\_all.sh'
''@inproceedings{dossou-etal-2022-afrolm,
title = "{A}fro{LM}: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 {A}frican Languages",
author = "Dossou, Bonaventure F. P. and
Tonja, Atnafu Lambebo and
Yousuf, Oreen and
Osei, Salomey and
Oppong, Abigail and
Shode, Iyanuoluwa and
Awoyomi, Oluwabusayo Olufunke and
Emezue, Chris",
booktitle = "Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "URL
pages = "52--64",}''
Reach out
---------
Do you have a question? Please create an issue and we will reach out as soon as possible
| [] | [
"TAGS\n#task_categories-fill-mask #task_ids-masked-language-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Amharic #language-Oromo #language-Lingala #language-Hausa #language-Igbo #language-Kinyarwanda #language-Ganda #language-Luo (Kenya and Tanzania) #language-Nigerian Pidgin #language-Swahili (macrolanguage) #language-Wolof #language-Yoruba #language-Bambara #language-Ghomálá' #language-Ewe #language-Fon #language-Mossi #language-Nyanja #language-Shona #language-Tswana #language-Twi #language-Xhosa #language-Zulu #license-cc-by-4.0 #afrolm #active learning #language modeling #research papers #natural language processing #self-active learning #arxiv-2211.03263 #region-us \n"
] |
e986b088ae469d2ba32caba321dbf911902ec8b7 |
# MLDoc
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/facebookresearch/MLDoc
### Dataset Summary
For document classification, we use the Multilingual Document Classification Corpus (MLDoc) [(Schwenk and Li, 2018)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.
This dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this [repository](https://github.com/facebookresearch/MLDoc).
### Supported Tasks and Leaderboards
Text Classification
### Languages
The dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.
## Dataset Structure
### Data Instances
<pre>
MCAT b' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\xc3\xb3 la sesi\xc3\xb3n de corros con baja por la ca\xc3\xadda del viernes en Wall Street y una toma de beneficios. El d\xc3\xb3lar ayudaba a apuntalar al mercado, que pronto podr\xc3\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\xc3\xb1os ocasionados por la huelga de camioneros en Espa\xc3\xb1a. Preussag participaba en un joint venture de exploraci\xc3\xb3n petrol\xc3\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\xc3\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '
</pre>
### Data Fields
- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)
- Text
### Data Splits
- train.tsv: 9,458 lines
- valid.tsv: 1,000 lines
- test.tsv: 4,000 lines
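Once access has been granted and the files obtained, a minimal loading sketch follows. It assumes each line is a tab-separated `label<TAB>text` record, as in the instance above; the paths are placeholders for your local copy:
```python
import pandas as pd

# Paths are hypothetical; point them at your local copy of the corpus.
splits = {
    name: pd.read_csv(f"{name}.tsv", sep="\t", names=["label", "text"], header=None)
    for name in ("train", "valid", "test")
}

print(splits["train"]["label"].value_counts())  # CCAT / ECAT / GCAT / MCAT counts
```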
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as "Reuters Corpus, Volume 1" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Initial Data Collection and Normalization
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Who are the source language producers?
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
### Annotations
#### Annotation process
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
#### Who are the annotators?
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
Access to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
For more information about the agreement see [here](https://trec.nist.gov/data/reuters/reuters.html)
### Citation Information
The following paper must be cited when using this corpus:
```
@InProceedings{SCHWENK18.658,
author = {Holger Schwenk and Xian Li},
title = {A Corpus for Multilingual Document Classification in Eight Languages},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {may},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
@inproceedings{schwenk-li-2018-corpus,
title = "A Corpus for Multilingual Document Classification in Eight Languages",
author = "Schwenk, Holger and
Li, Xian",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1560",
}
```
| PlanTL-GOB-ES/MLDoc | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-28T10:35:05+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["es"], "license": "cc-by-nc-4.0", "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "MLDoc", "tags": []} | 2022-11-03T09:24:03+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #language-Spanish #license-cc-by-nc-4.0 #region-us
|
# MLDoc
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Website: URL
### Dataset Summary
For document classification, we use the Multilingual Document Classification Corpus (MLDoc) (Schwenk and Li, 2018), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.
This dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this repository.
### Supported Tasks and Leaderboards
Text Classification
### Languages
The dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.
## Dataset Structure
### Data Instances
<pre>
MCAT b' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\xc3\xb3 la sesi\xc3\xb3n de corros con baja por la ca\xc3\xadda del viernes en Wall Street y una toma de beneficios. El d\xc3\xb3lar ayudaba a apuntalar al mercado, que pronto podr\xc3\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\xc3\xb1os ocasionados por la huelga de camioneros en Espa\xc3\xb1a. Preussag participaba en un joint venture de exploraci\xc3\xb3n petrol\xc3\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\xc3\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '
</pre>
### Data Fields
- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)
- Text
### Data Splits
- URL: 9,458 lines
- URL: 1,000 lines
- URL: 4,000 lines
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as "Reuters Corpus, Volume 1" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.
For more information visit the paper (Lewis et al., 2004).
#### Initial Data Collection and Normalization
For more information visit the paper (Lewis et al., 2004).
#### Who are the source language producers?
For more information visit the paper (Lewis et al., 2004).
### Annotations
#### Annotation process
For more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).
#### Who are the annotators?
For more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
Access to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
For more information about the agreement see here
The following paper must be cited when using this corpus:
| [
"# MLDoc",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL",
"### Dataset Summary\n\nFor document classification, we use the Multilingual Document Classification Corpus (MLDoc) (Schwenk and Li, 2018), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.\n\nThis dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this repository.",
"### Supported Tasks and Leaderboards\n\nText Classification",
"### Languages\n\nThe dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.",
"## Dataset Structure",
"### Data Instances\n\n<pre>\nMCAT\tb' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\\xc3\\xb3 la sesi\\xc3\\xb3n de corros con baja por la ca\\xc3\\xadda del viernes en Wall Street y una toma de beneficios. El d\\xc3\\xb3lar ayudaba a apuntalar al mercado, que pronto podr\\xc3\\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\\xc3\\xb1os ocasionados por la huelga de camioneros en Espa\\xc3\\xb1a. Preussag participaba en un joint venture de exploraci\\xc3\\xb3n petrol\\xc3\\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\\xc3\\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '\n</pre>",
"### Data Fields\n\n- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)\n- Text",
"### Data Splits\n\n- URL: 9,458 lines\n- URL: 1,000 lines \n- URL: 4,000 lines",
"## Dataset Creation",
"### Curation Rationale\n\n[N/A]",
"### Source Data\n\nThe source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as \"Reuters Corpus, Volume 1\" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.\n\nFor more information visit the paper (Lewis et al., 2004).",
"#### Initial Data Collection and Normalization\n\nFor more information visit the paper (Lewis et al., 2004).",
"#### Who are the source language producers?\n\nFor more information visit the paper (Lewis et al., 2004).",
"### Annotations",
"#### Annotation process\n\nFor more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).",
"#### Who are the annotators?\n\nFor more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n[N/A]",
"### Licensing Information\n\nAccess to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:\n- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.\n- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization. \n\nFor more information about the agreement see here\n\n\n\nThe following paper must be cited when using this corpus:"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #language-Spanish #license-cc-by-nc-4.0 #region-us \n",
"# MLDoc",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Website: URL",
"### Dataset Summary\n\nFor document classification, we use the Multilingual Document Classification Corpus (MLDoc) (Schwenk and Li, 2018), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.\n\nThis dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this repository.",
"### Supported Tasks and Leaderboards\n\nText Classification",
"### Languages\n\nThe dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.",
"## Dataset Structure",
"### Data Instances\n\n<pre>\nMCAT\tb' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\\xc3\\xb3 la sesi\\xc3\\xb3n de corros con baja por la ca\\xc3\\xadda del viernes en Wall Street y una toma de beneficios. El d\\xc3\\xb3lar ayudaba a apuntalar al mercado, que pronto podr\\xc3\\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\\xc3\\xb1os ocasionados por la huelga de camioneros en Espa\\xc3\\xb1a. Preussag participaba en un joint venture de exploraci\\xc3\\xb3n petrol\\xc3\\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\\xc3\\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '\n</pre>",
"### Data Fields\n\n- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)\n- Text",
"### Data Splits\n\n- URL: 9,458 lines\n- URL: 1,000 lines \n- URL: 4,000 lines",
"## Dataset Creation",
"### Curation Rationale\n\n[N/A]",
"### Source Data\n\nThe source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as \"Reuters Corpus, Volume 1\" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.\n\nFor more information visit the paper (Lewis et al., 2004).",
"#### Initial Data Collection and Normalization\n\nFor more information visit the paper (Lewis et al., 2004).",
"#### Who are the source language producers?\n\nFor more information visit the paper (Lewis et al., 2004).",
"### Annotations",
"#### Annotation process\n\nFor more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).",
"#### Who are the annotators?\n\nFor more information visit the paper (Schwenk and Li, 2018; Lewis et al., 2004).",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset contributes to the development of language models in Spanish.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n[N/A]",
"### Licensing Information\n\nAccess to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:\n- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.\n- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization. \n\nFor more information about the agreement see here\n\n\n\nThe following paper must be cited when using this corpus:"
] |
9a1c7f132e7b9066c18722c97c7dbf06b85012de | # Dataset Card for "latent_celebA_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_celebA_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
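
# Round-trip sketch: re-encoding the decoded image back to a latent.
# (Illustrative only -- the dataset authors' exact preprocessing pipeline is
# an assumption, not documented here.)
from torchvision import transforms
to_tensor = transforms.Compose([
    transforms.ToTensor(),                                   # PIL -> (0, 1) tensor
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),  # (-1, 1), as the VAE expects
])
x = to_tensor(image).unsqueeze(0)                            # (1, 3, 256, 256)
with torch.no_grad():
    re_encoded = vae.encode(x).latent_dist.sample() * 0.18215  # (1, 4, 32, 32)
print(re_encoded.shape)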
``` | tglcourse/latent_celebA_256px | [
"region:us"
] | 2022-10-28T10:45:46+00:00 | {"dataset_info": {"features": [{"name": "latent", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 3427164684, "num_examples": 202599}], "download_size": 3338993120, "dataset_size": 3427164684}} | 2022-10-28T10:49:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "latent_celebA_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion
Decoding
| [
"# Dataset Card for \"latent_celebA_256px\"\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"latent_celebA_256px\"\n\nEach image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion\n\nDecoding"
] |
9f23ec8ffc93cae32ae3c203ffa6d6610bbbd6c8 |
# Dataset Card for mt_nap_it
## Table of Contents
- [Dataset Card for mt_nap_it](#dataset-card-for-mt-nap-it)
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
### Dataset Summary
This dataset comprises traditional Neapolitan songs from [napoligrafia](https://www.napoligrafia.it) translated into Italian.
### Languages
- italian-to-neapolitan
### Data Instances
A sample from the dataset.
```python
{
'url': "url",
'napoletano': "o, quacche ghiuorno, 'a frennesia mme piglia",
'italiano': "o, qualche giorno, la rabbia mi prende"
}
```
The text is provided without further preprocessing or tokenization.
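A minimal sketch for loading the corpus from the Hub (the split name `train` is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("efederici/mt_nap_it", split="train")
print(ds[0]["napoletano"], "->", ds[0]["italiano"])
```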
### Data Fields
- `url`: source URL.
- `napoletano`: Neapolitan text.
- `italiano`: Italian text.
### Dataset Creation
The dataset was created by scraping [napoligrafia](https://www.napoligrafia.it) songs. | efederici/mt_nap_it | [
"task_categories:translation",
"size_categories:unknown",
"language:it",
"license:unknown",
"conditional-text-generation",
"region:us"
] | 2022-10-28T10:51:09+00:00 | {"language": ["it"], "license": ["unknown"], "size_categories": ["unknown"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "mt_nap_it", "tags": ["conditional-text-generation"]} | 2022-10-28T13:32:26+00:00 | [] | [
"it"
] | TAGS
#task_categories-translation #size_categories-unknown #language-Italian #license-unknown #conditional-text-generation #region-us
|
# Dataset Card for mt_en_it
## Table of Contents
- Dataset Card for mt_en_it
- Table of Contents
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
### Dataset Summary
This dataset comprises traditional Neapolitan songs from napoligrafia translated into Italian.
### Languages
- italian-to-neapolitan
### Data Instances
A sample from the dataset.
The text is provided without further preprocessing or tokenization.
### Data Fields
- 'url': source URL.
- 'napoletano': Neapolitan text.
- 'italiano': Italian text.
### Dataset Creation
The dataset was created by scraping napoligrafia songs. | [
"# Dataset Card for mt_en_it",
"## Table of Contents\n\n- Dataset Card for mt_en_it\n - Table of Contents\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Dataset Creation",
"### Dataset Summary\nThis dataset comprises traditional Neapolitan songs from napoligrafia translated into Italian.",
"### Languages\n- italian-to-neapolitan",
"### Data Instances\nA sample from the dataset.\n\nThe text is provided without further preprocessing or tokenization.",
"### Data Fields\n- 'url': source URL.\n- 'napoletano': Neapolitan text.\n- 'italiano': Italian text.",
"### Dataset Creation\nThe dataset was created by scraping napoligrafia songs."
] | [
"TAGS\n#task_categories-translation #size_categories-unknown #language-Italian #license-unknown #conditional-text-generation #region-us \n",
"# Dataset Card for mt_en_it",
"## Table of Contents\n\n- Dataset Card for mt_en_it\n - Table of Contents\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Dataset Creation",
"### Dataset Summary\nThis dataset comprises traditional Neapolitan songs from napoligrafia translated into Italian.",
"### Languages\n- italian-to-neapolitan",
"### Data Instances\nA sample from the dataset.\n\nThe text is provided without further preprocessing or tokenization.",
"### Data Fields\n- 'url': source URL.\n- 'napoletano': Neapolitan text.\n- 'italiano': Italian text.",
"### Dataset Creation\nThe dataset was created by scraping napoligrafia songs."
] |
30b32ca54b7c38130a1bcbf0b5f534904af9971f | <h4> Disclosure </h4>
<p> While it's not perfect, I hope that you are able to create some nice pictures. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by spectral_wind </em>
add <b>[ ]</b> around it to reduce its weight, e.g. <em>[art by spectral_wind]</em>.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by spectral_wind-6500</em></li>
<li>10,000 steps <em>Usage: art by spectral_wind-10000</em> </li>
<li>15,000 steps <em>Usage: art by spectral_wind</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/BJNFbAf.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/nKig2lQ.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/ElF2xde.png"></td>
</tr>
</tbody>
</table>
<h4> Prompt Comparison </h4>
<a href="https://i.imgur.com/QSEM4jU.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/QSEM4jU.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> | zZWipeoutZz/spectral_wind | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T10:52:24+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-28T13:53:12+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Disclosure
While its not perfect i hope that you are able to create some nice pictures, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know
#### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*art by spectral\_wind*
add **[ ]** around it to reduce its weight.
#### Included Files
* 6500 steps *Usage: art by spectral\_wind- 6500*
* 10,000 steps *Usage: art by spectral\_wind-10000*
* 15,000 steps *Usage: art by spectral\_wind*
cheers
Wipeout
#### Example Pictures
#### prompt comparison
[<img height="100%" width="100%" src="https://i.URL
<h4> Licence
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="URL read the full license here</a>](https://i.URL target=) | [
"#### Disclosure\n\n\n While its not perfect i hope that you are able to create some nice pictures, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by spectral\\_wind* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by spectral\\_wind- 6500*\n* 10,000 steps *Usage: art by spectral\\_wind-10000*\n* 15,000 steps *Usage: art by spectral\\_wind*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n[<img height=\"100%\" width=\"100%\" src=\"https://i.URL\n<h4> Licence \nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>](https://i.URL target=)"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Disclosure\n\n\n While its not perfect i hope that you are able to create some nice pictures, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by spectral\\_wind* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by spectral\\_wind- 6500*\n* 10,000 steps *Usage: art by spectral\\_wind-10000*\n* 15,000 steps *Usage: art by spectral\\_wind*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n[<img height=\"100%\" width=\"100%\" src=\"https://i.URL\n<h4> Licence \nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>](https://i.URL target=)"
] |
422fa1b362f44da776232e5c6d79ef0e9d9d665e | # Media Dataset for IRAN Protests
Following the recent protests in Iran over [__Mahsa Amini__](https://en.wikipedia.org/wiki/Death_of_Mahsa_Amini)'s death, her name has been trending on social media such as Twitter ([#MahsaAmini](https://twitter.com/search?q=%23MahsaAmini), [#مهسا_امینی](https://twitter.com/search?q=%23%D9%85%D9%87%D8%B3%D8%A7_%D8%A7%D9%85%DB%8C%D9%86%DB%8C)).
Until October 15, 2022, there had been 300+ million tweets on Twitter, and many of these posts include media files such as images and videos.
This dataset will be helpful for media companies, developers, or anyone interested in reviewing and assessing these files. The data has been collected since September 14, 2022.
More than __3.1M records__ (including 2.5M unique images and 600 thousand videos) are available in the current dataset.
### Dataset:
1. created_at: datetime when the tweet posted
2. md_url: URL of the media
3. md_type: show media type (image or video)
4. tw_id: tweet id
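As a minimal analysis sketch, assuming the records are distributed as a CSV file with the four columns above (the file name is hypothetical):
```python
import pandas as pd

# File name is a placeholder; point it at the downloaded records.
df = pd.read_csv("iran_protests_media.csv", parse_dates=["created_at"])

videos = df[df["md_type"] == "video"]               # keep only video records
daily = videos.set_index("created_at").resample("D").size()
print(daily.head())                                 # videos posted per day
```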
## Disclaimer:
The dataset includes any type of media, based on what users have published on Twitter; accordingly, no claims should be made against the publisher of this dataset.
For more information about the dataset and how to download and read the media files, please refer to [Github](https://github.com/M-Amrollahi/Iran-protests-media). | MahdiA/Iran-protests-media | [
"license:apache-2.0",
"region:us"
] | 2022-10-28T13:08:37+00:00 | {"license": "apache-2.0"} | 2022-10-28T13:59:06+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Media Dataset for IRAN Protests
Following recent protests in Iran corresponding to __Mahsa Amini__'s death, her name has been a trend on social media like Twitter( #MahsaAmini , #مهسا_امینی).
Untile Octore 15, 2022, there have been 300+ million tweets on Twitter and among them, there are many posts including media files like images and videos.
It will be helpful for Media Companies, Developers or whoever is interested in reviewing and assessing these files. Our data has been collected since September 14, 2022.
More than __3.1M records__ (including(unique) 2.5M images and 600 thousands videos) is available in current dataset.
### Dataset:
1. created_at: datetime when the tweet posted
2. md_url: URL of the media
3. md_type: show media type (image or video)
4. tw_id: tweet id
## Disclaimer:
The dataset includes any type of media based on what is published by users on Twitter. So, there will be no accusation against the publisher of this dataset.
For more information about dataset and the way that able to download the read media files, please refer to Github. | [
"# Media Dataset for IRAN Protests\n\nFollowing recent protests in Iran corresponding to __Mahsa Amini__'s death, her name has been a trend on social media like Twitter( #MahsaAmini , #مهسا_امینی).\nUntile Octore 15, 2022, there have been 300+ million tweets on Twitter and among them, there are many posts including media files like images and videos.\n\nIt will be helpful for Media Companies, Developers or whoever is interested in reviewing and assessing these files. Our data has been collected since September 14, 2022.\nMore than __3.1M records__ (including(unique) 2.5M images and 600 thousands videos) is available in current dataset.",
"### Dataset:\n1. created_at: datetime when the tweet posted\n2. md_url: URL of the media\n3. md_type: show media type (image or video)\n4. tw_id: tweet id",
"## Disclaimer:\nThe dataset includes any type of media based on what is published by users on Twitter. So, there will be no accusation against the publisher of this dataset.\n\n\n\nFor more information about dataset and the way that able to download the read media files, please refer to Github."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Media Dataset for IRAN Protests\n\nFollowing recent protests in Iran corresponding to __Mahsa Amini__'s death, her name has been a trend on social media like Twitter( #MahsaAmini , #مهسا_امینی).\nUntile Octore 15, 2022, there have been 300+ million tweets on Twitter and among them, there are many posts including media files like images and videos.\n\nIt will be helpful for Media Companies, Developers or whoever is interested in reviewing and assessing these files. Our data has been collected since September 14, 2022.\nMore than __3.1M records__ (including(unique) 2.5M images and 600 thousands videos) is available in current dataset.",
"### Dataset:\n1. created_at: datetime when the tweet posted\n2. md_url: URL of the media\n3. md_type: show media type (image or video)\n4. tw_id: tweet id",
"## Disclaimer:\nThe dataset includes any type of media based on what is published by users on Twitter. So, there will be no accusation against the publisher of this dataset.\n\n\n\nFor more information about dataset and the way that able to download the read media files, please refer to Github."
] |
91ca4a0e810217bb1ac2e440805ccb3514bf2637 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164990 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:26:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
08173c5722c09727379f8ec5f538618236827272 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164991 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:28:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
4a74d8864b2f0617d0e7e1e09d6e294c709b339d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164992 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-28T13:22:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v1"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v1", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v1", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-28T13:50:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1\n* Config: mathemakitten--winobias_antistereotype_test_cot_v1\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
7e9dcff44427e84a55e4f4f44223e979ff5eac19 |
# Maccha style embedding
## Samples
<img alt="Samples" src="https://huggingface.co/datasets/DJSoft/maccha_artist_style/resolve/main/samples.jpg" style="max-height: 80vh"/>
<img alt="Comparsion" src="https://huggingface.co/datasets/DJSoft/maccha_artist_style/resolve/main/steps.png" style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve the style of Matcha_ / maccha_(mochancc) [Pixiv](https://www.pixiv.net/en/users/2583663)
## Usage
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __art by maccha-*__

Wrap the token in parentheses with a weight, e.g. **(art by maccha-8000:1.0)**, to adjust how strongly the style applies
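For example (illustrative prompts; the subject tags are placeholders, and either step variant from the list below can be substituted):

```
masterpiece, 1girl, looking at viewer, art by maccha-8000
portrait, soft lighting, (art by maccha-15000:1.2)
```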
## Included Files
- 8000 steps Usage: **art by maccha-8000**
- 15000 steps Usage: **art by maccha-15000**
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | DJSoft/maccha_artist_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T14:06:19+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-27T16:00:22+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
|
# Maccha style embedding
## Samples
<img alt="Samples" src="URL style="max-height: 80vh"/>
<img alt="Comparsion" src="URL style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve style of Matcha_ / maccha_(mochancc) Pixiv
## Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __art by maccha-*__
Add ( :1.0) around it to modify its weight
## Included Files
- 8000 steps Usage: art by maccha-8000
- 15000 steps Usage: art by maccha-15000
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here | [
"# Maccha style embedding",
"## Samples\n\n<img alt=\"Samples\" src=\"URL style=\"max-height: 80vh\"/>\n<img alt=\"Comparsion\" src=\"URL style=\"max-height: 80vh\"/>",
"## About\n\nUse this Stable Diffusion embedding to achieve style of Matcha_ / maccha_(mochancc) Pixiv",
"## Usage\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder \nTo use it in a prompt add __art by maccha-*__ \n\nAdd ( :1.0) around it to modify its weight",
"## Included Files\n- 8000 steps Usage: art by maccha-8000\n- 15000 steps Usage: art by maccha-15000",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"# Maccha style embedding",
"## Samples\n\n<img alt=\"Samples\" src=\"URL style=\"max-height: 80vh\"/>\n<img alt=\"Comparsion\" src=\"URL style=\"max-height: 80vh\"/>",
"## About\n\nUse this Stable Diffusion embedding to achieve style of Matcha_ / maccha_(mochancc) Pixiv",
"## Usage\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder \nTo use it in a prompt add __art by maccha-*__ \n\nAdd ( :1.0) around it to modify its weight",
"## Included Files\n- 8000 steps Usage: art by maccha-8000\n- 15000 steps Usage: art by maccha-15000",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here"
] |
8834a2b0cd1b4c82e9d6fb5c5ba80d9c2c916a13 |
# Yuki Miku 2017 embedding
## Samples
<img alt="Samples" src="https://huggingface.co/datasets/DJSoft/yuki_miku_2017_outfit/resolve/main/samples.jpg" style="max-height: 80vh"/>
<img alt="Comparsion" src="https://huggingface.co/datasets/DJSoft/yuki_miku_2017_outfit/resolve/main/steps.png" style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit
## Usage
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __yuki_miku_2017-*__

Wrap the token in parentheses with a weight, e.g. **(yuki_miku_2017-8000:1.0)**, to adjust how strongly the outfit applies
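For example (illustrative prompts; swap in any of the step variants listed below):

```
1girl, hatsune miku, winter outfit, snow, yuki_miku_2017-10000
full body, smiling, (yuki_miku_2017-15000:1.1)
```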
## Included Files
- 8000 steps Usage: **yuki_miku_2017-8000**
- 10000 steps Usage: **yuki_miku_2017-10000**
- 15000 steps Usage: **yuki_miku_2017-15000**
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | DJSoft/yuki_miku_2017_outfit | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-28T14:43:14+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-27T15:43:43+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
|
# Yuki Miku 2017 embedding
## Samples
<img alt="Samples" src="URL style="max-height: 80vh"/>
<img alt="Comparsion" src="URL style="max-height: 80vh"/>
## About
Use this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit
## Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add __yuki_miku_2017-*__
Add ( :1.0) around it to modify its weight
## Included Files
- 8000 steps Usage: yuki_miku_2017-8000
- 10000 steps Usage: yuki_miku_2017-10000
- 15000 steps Usage: yuki_miku_2017-15000
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here | [
"# Yuki Miku 2017 embedding",
"## Samples\n\n<img alt=\"Samples\" src=\"URL style=\"max-height: 80vh\"/>\n<img alt=\"Comparsion\" src=\"URL style=\"max-height: 80vh\"/>",
"## About\n\nUse this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit",
"## Usage\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder \nTo use it in a prompt add __yuki_miku_2017-*__ \n\nAdd ( :1.0) around it to modify its weight",
"## Included Files\n- 8000 steps Usage: yuki_miku_2017-8000\n- 10000 steps Usage: yuki_miku_2017-10000\n- 15000 steps Usage: yuki_miku_2017-15000",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"# Yuki Miku 2017 embedding",
"## Samples\n\n<img alt=\"Samples\" src=\"URL style=\"max-height: 80vh\"/>\n<img alt=\"Comparsion\" src=\"URL style=\"max-height: 80vh\"/>",
"## About\n\nUse this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit",
"## Usage\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder \nTo use it in a prompt add __yuki_miku_2017-*__ \n\nAdd ( :1.0) around it to modify its weight",
"## Included Files\n- 8000 steps Usage: yuki_miku_2017-8000\n- 10000 steps Usage: yuki_miku_2017-10000\n- 15000 steps Usage: yuki_miku_2017-15000",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here"
] |
115a522e89601c99a3ee2b4f9622b8df0a19639f | # Dataset Card for "focus_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kanak8278/focus_test | [
"region:us"
] | 2022-10-28T17:42:49+00:00 | {"dataset_info": {"features": [{"name": "dialogID", "dtype": "string"}, {"name": "utterance", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "hit_knowledge", "dtype": "string"}, {"name": "ground_knowledge", "dtype": "string"}, {"name": "ground_persona", "dtype": "string"}, {"name": "similarity_score", "dtype": "float64"}, {"name": "persona1", "dtype": "string"}, {"name": "persona2", "dtype": "string"}, {"name": "persona3", "dtype": "string"}, {"name": "persona4", "dtype": "string"}, {"name": "persona5", "dtype": "string"}, {"name": "persona_grounding1", "dtype": "bool"}, {"name": "persona_grounding2", "dtype": "bool"}, {"name": "persona_grounding3", "dtype": "bool"}, {"name": "persona_grounding4", "dtype": "bool"}, {"name": "persona_grounding5", "dtype": "bool"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6713468, "num_examples": 9035}], "download_size": 2783764, "dataset_size": 6713468}} | 2022-10-28T17:42:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "focus_test"
More Information needed | [
"# Dataset Card for \"focus_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"focus_test\"\n\nMore Information needed"
] |
84f973e948620e38b0c7e9fa880c20ab0eeede0a |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
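A minimal sketch of loading one of the tasks with the Hugging Face `datasets` library (the canonical `glue` dataset id is assumed; any config name from this card, e.g. `cola` or `sst2`, can be substituted):

```python
# Load a single GLUE task and look at one training example.
from datasets import load_dataset

cola = load_dataset("glue", "cola")  # other configs: "sst2", "mnli", "rte", ...
print(cola)                          # DatasetDict with train/validation/test splits
print(cola["train"][0])              # {'sentence': ..., 'label': ..., 'idx': ...}
```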
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
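The classification labels above are stored as integer-backed `ClassLabel` features, so the string names can be recovered programmatically; a short sketch (assuming the `datasets` library and the canonical `glue` id, as in the loading example above):

```python
# Map between integer labels and their string names for an NLI config.
from datasets import load_dataset

mnli = load_dataset("glue", "mnli", split="validation_matched")
label_feature = mnli.features["label"]         # ClassLabel(names=['entailment', 'neutral', 'contradiction'])
print(label_feature.int2str(0))                # 'entailment'
print(label_feature.str2int("contradiction"))  # 2
```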
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
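The remaining configs below are still missing their tables; until they are filled in, the split sizes can be queried directly (a sketch, same assumptions as the loading example above):

```python
# Print the number of rows per split for the GLUE configs without tables yet.
from datasets import load_dataset

for config in ["mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]:
    ds = load_dataset("glue", config)
    print(config, {split: d.num_rows for split, d in ds.items()})
```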
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | severo/glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"region:us"
] | 2022-10-28T20:00:14+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2022-10-28T15:35:04+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #region-us
| Dataset Card for GLUE
=====================
Table of Contents
-----------------
* Dataset Card for GLUE
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Languages
+ Dataset Structure
- Data Instances
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Fields
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
- Data Splits
* ax
* cola
* mnli
* mnli\_matched
* mnli\_mismatched
* mrpc
* qnli
* qqp
* rte
* sst2
* stsb
* wnli
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 955.33 MB
* Size of the generated dataset: 229.68 MB
* Total amount of disk used: 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli\_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli\_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 'en')
Dataset Structure
-----------------
### Data Instances
#### ax
* Size of downloaded dataset files: 0.21 MB
* Size of the generated dataset: 0.23 MB
* Total amount of disk used: 0.44 MB
An example of 'test' looks as follows.
#### cola
* Size of downloaded dataset files: 0.36 MB
* Size of the generated dataset: 0.58 MB
* Total amount of disk used: 0.94 MB
An example of 'train' looks as follows.
#### mnli
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 78.65 MB
* Total amount of disk used: 376.95 MB
An example of 'train' looks as follows.
#### mnli\_matched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.52 MB
* Total amount of disk used: 301.82 MB
An example of 'test' looks as follows.
#### mnli\_mismatched
* Size of downloaded dataset files: 298.29 MB
* Size of the generated dataset: 3.73 MB
* Total amount of disk used: 302.02 MB
An example of 'test' looks as follows.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Fields
The data fields are the same among all splits.
#### ax
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### cola
* 'sentence': a 'string' feature.
* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).
* 'idx': a 'int32' feature.
#### mnli
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_matched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mnli\_mismatched
* 'premise': a 'string' feature.
* 'hypothesis': a 'string' feature.
* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).
* 'idx': a 'int32' feature.
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
### Data Splits
#### ax
#### cola
#### mnli
#### mnli\_matched
#### mnli\_mismatched
#### mrpc
#### qnli
#### qqp
#### rte
#### sst2
#### stsb
#### wnli
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset.
| [
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-sentiment-classification #task_ids-text-scoring #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #qa-nli #coreference-nli #paraphrase-identification #region-us \n",
"### Dataset Summary\n\n\nGLUE, the General Language Understanding Evaluation benchmark (URL is a collection of resources for training, evaluating, and analyzing natural language understanding systems.",
"### Supported Tasks and Leaderboards\n\n\nThe leaderboard for the GLUE benchmark can be found at this address. It comprises the following tasks:",
"#### ax\n\n\nA manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.",
"#### cola\n\n\nThe Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.",
"#### mnli\n\n\nThe Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data.",
"#### mnli\\_matched\n\n\nThe matched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mnli\\_mismatched\n\n\nThe mismatched validation and test splits from MNLI. See the \"mnli\" BuilderConfig for additional information.",
"#### mrpc\n\n\nThe Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.",
"#### qnli\n\n\nThe Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.",
"#### qqp\n\n\nThe Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.",
"#### rte\n\n\nThe Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.",
"#### sst2\n\n\nThe Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.",
"#### stsb\n\n\nThe Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.",
"#### wnli\n\n\nThe Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).",
"### Languages\n\n\nThe language data in GLUE is in English (BCP-47 'en')\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### ax\n\n\n* Size of downloaded dataset files: 0.21 MB\n* Size of the generated dataset: 0.23 MB\n* Total amount of disk used: 0.44 MB\n\n\nAn example of 'test' looks as follows.",
"#### cola\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.58 MB\n* Total amount of disk used: 0.94 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 78.65 MB\n* Total amount of disk used: 376.95 MB\n\n\nAn example of 'train' looks as follows.",
"#### mnli\\_matched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.52 MB\n* Total amount of disk used: 301.82 MB\n\n\nAn example of 'test' looks as follows.",
"#### mnli\\_mismatched\n\n\n* Size of downloaded dataset files: 298.29 MB\n* Size of the generated dataset: 3.73 MB\n* Total amount of disk used: 302.02 MB\n\n\nAn example of 'test' looks as follows.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### ax\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### cola\n\n\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'unacceptable' (0), 'acceptable' (1).\n* 'idx': a 'int32' feature.",
"#### mnli\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_matched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mnli\\_mismatched\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli",
"### Data Splits",
"#### ax",
"#### cola",
"#### mnli",
"#### mnli\\_matched",
"#### mnli\\_mismatched",
"#### mrpc",
"#### qnli",
"#### qqp",
"#### rte",
"#### sst2",
"#### stsb",
"#### wnli\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patpizio, @jeswan, @thomwolf, @patrickvonplaten, @mariamabarham for adding this dataset."
] |
8c8403a9c0cb6a7c50d305d661bb06f8f1eac2d5 | # Dataset Card for "Romance-cleaned-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-cleaned-3 | [
"region:us"
] | 2022-10-29T05:03:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3369959.5092553934, "num_examples": 6466}, {"name": "validation", "num_bytes": 374729.4907446068, "num_examples": 719}], "download_size": 2300275, "dataset_size": 3744689.0}} | 2022-10-29T05:03:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Romance-cleaned-3"
More Information needed | [
"# Dataset Card for \"Romance-cleaned-3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Romance-cleaned-3\"\n\nMore Information needed"
] |
8b2557a673e0e0d687c1484a7e197d3f8c43c699 |
# Dataset Card for Pokémon BLIP captions with English and Japanese.
Dataset used to train a Pokémon text-to-image model; it adds a Japanese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains `image`, `en_text` (caption in English) and `ja_text` (caption in Japanese) keys. `image` is a varying size PIL jpeg, and the text fields are the accompanying captions. Only a train split is provided.
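As a quick sanity check, the paired captions can be inspected with the `datasets` library. This is a minimal sketch, assuming the repository id shown on this card and the keys described above:

```python
from datasets import load_dataset

# Only a train split is provided, per the card.
ds = load_dataset("svjack/pokemon-blip-captions-en-ja", split="train")

row = ds[0]
print(row["en_text"])  # English BLIP caption
print(row["ja_text"])  # Japanese translation of the caption
row["image"].save("pokemon_sample.jpg")  # varying size PIL jpeg
```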
The Japanese captions are translated by [Deepl](https://www.deepl.com/translator) | svjack/pokemon-blip-captions-en-ja | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-29T06:26:57+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en", "ja"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": []} | 2022-10-31T06:22:04+00:00 | [] | [
"en",
"ja"
] | TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-multilingual #size_categories-n<1K #source_datasets-huggan/few-shot-pokemon #language-English #language-Japanese #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for Pokémon BLIP captions with English and Japanese.
Dataset used to train a Pokémon text-to-image model; it adds a Japanese column to Pokémon BLIP captions.
BLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains 'image', 'en_text' (caption in English) and 'ja_text' (caption in Japanese) keys. 'image' is a varying size PIL jpeg, and the text fields are the accompanying captions. Only a train split is provided.
The Japanese captions are translated by Deepl | [
"# Dataset Card for Pokémon BLIP captions with English and Japanese.\n\nDataset used to train Pokémon text to image model, add a Japanese Column of Pokémon BLIP captions\n\nBLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains image en_text (caption in English) and ja_text (caption in Japanese) keys. image is a varying size PIL jpeg, and text is the accompanying text caption. Only a train split is provided.\n\nThe Japanese captions are translated by Deepl"
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-multilingual #size_categories-n<1K #source_datasets-huggan/few-shot-pokemon #language-English #language-Japanese #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for Pokémon BLIP captions with English and Japanese.\n\nDataset used to train Pokémon text to image model, add a Japanese Column of Pokémon BLIP captions\n\nBLIP generated captions for Pokémon images from Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.\n\nFor each row the dataset contains image en_text (caption in English) and ja_text (caption in Japanese) keys. image is a varying size PIL jpeg, and text is the accompanying text caption. Only a train split is provided.\n\nThe Japanese captions are translated by Deepl"
] |
473ce373f77f53101b124af68bc5d81ef8f8ef48 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | duyngtr16061999/fashion_text_to_image | [
"region:us"
] | 2022-10-29T07:50:41+00:00 | {} | 2022-11-21T05:54:22+00:00 | [] | [] | TAGS
#region-us
| ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: "Fashion captions"
size_categories:
- n<100K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
7f5566dbfedcb5db78e493a0bdf04b410ec769fe |
# Sam Yang Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by sam_yang"```
If it is too strong, just add [] around it.
Trained until 5000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/cbtBjwH.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/r5s8bSO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NpGj5KU.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eWJlaf5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/DOJvxTJ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/sam_yang | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-29T10:24:38+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T10:26:45+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Sam Yang Artist Embedding / Textual Inversion
=============================================
Usage
-----
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 5000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
9cf098c4cfce7ab970110f983f16773087f13830 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jomon07](https://huggingface.co/Jomon07) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-98a820-1924665124 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-29T13:24:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "patrickvonplaten/bert2bert_cnn_daily_mail", "metrics": ["accuracy", "bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-10-29T14:11:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Jomon07 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: patrickvonplaten/bert2bert_cnn_daily_mail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jomon07 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: patrickvonplaten/bert2bert_cnn_daily_mail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jomon07 for evaluating this model."
] |
55c6aa8cf6594b07167e47488ae303b84f4daf38 | <h4> Disclosure </h4>
<p> I hope that you are able to create some nice pictures; if you have any embedding suggestions or issues, please let me know </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by insane_style </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by insane_style-6500</em></li>
<li>10,000 steps <em>Usage: art by insane_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by insane_style</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/YGROrC5.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/IFQRJcH.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/FwfXft0.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/SEkzaVr.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/SEkzaVr.jpg"></a>
| zZWipeoutZz/insane_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-29T15:14:08+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-29T15:31:20+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Disclosure
 I hope that you are able to create some nice pictures; if you have any embedding suggestions or issues, please let me know
#### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*art by insane\_style*
add **[ ]** around it to reduce its weight.
#### Included Files
* 6500 steps *Usage: art by insane\_style-6500*
* 10,000 steps *Usage: art by insane\_style-10000*
* 15,000 steps *Usage: art by insane\_style*
cheers
Wipeout
#### Example Pictures
#### prompt comparison
*click the image to enlarge*
[<img height="50%" width="50%" src="https://i.URL](https://i.URL target=) | [
"#### Disclosure\n\n\n I hope that you are able to create some nice pictures,, if you have any embedding suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by insane\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by insane\\_style-6500*\n* 10,000 steps *Usage: art by insane\\_style-10000*\n* 15,000 steps *Usage: art by insane\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Disclosure\n\n\n I hope that you are able to create some nice pictures,, if you have any embedding suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by insane\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by insane\\_style-6500*\n* 10,000 steps *Usage: art by insane\\_style-10000*\n* 15,000 steps *Usage: art by insane\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] |
22ffd55109e12e1b82003a93e40fee0298e985a3 |
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
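A minimal loading sketch, assuming the `zoheb/sketch-scene` repository id and the split described above:

```python
from datasets import load_dataset

ds = load_dataset("zoheb/sketch-scene", split="train")
print(len(ds))  # roughly 10,000 sketch/description pairs

sample = ds[0]
print(sample["text"])                      # free-text description of the scene
sample["image"].save("sketch_sample.jpg")  # varying size PIL jpeg of the sketch
```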
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.},
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` | zoheb/sketch-scene | [
"task_categories:text-to-image",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-29T17:15:58+00:00 | {"language_creators": ["machine-generated"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<10K"], "source_datasets": ["FS-COCO"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Sketch Scene Descriptions", "tags": []} | 2022-10-30T10:07:48+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<10K #source_datasets-FS-COCO #language-English #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.
If you use this dataset, please cite it as:
| [
"# Dataset Card for Sketch Scene Descriptions\n\n_Dataset used to train [Sketch Scene text to image model]()_\n\nWe advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey well scene content but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.\n\nFor each row, the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.\n\n\nIf you use this dataset, please cite it as:"
] | [
"TAGS\n#task_categories-text-to-image #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<10K #source_datasets-FS-COCO #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for Sketch Scene Descriptions\n\n_Dataset used to train [Sketch Scene text to image model]()_\n\nWe advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey well scene content but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.\n\nFor each row, the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.\n\n\nIf you use this dataset, please cite it as:"
] |
5c0abe70104c7e699d1834afd39232def41b0f77 | # Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/turkishReviews-ds-mini | [
"region:us"
] | 2022-10-29T17:16:42+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1296087.3, "num_examples": 3600}, {"name": "validation", "num_bytes": 144009.7, "num_examples": 400}], "download_size": 915922, "dataset_size": 1440097.0}} | 2022-11-07T10:03:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "turkishReviews-ds-mini"
More Information needed | [
"# Dataset Card for \"turkishReviews-ds-mini\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"turkishReviews-ds-mini\"\n\nMore Information needed"
] |
13070b62b99dbd27502b95ef9980f2a34d32f691 | Pictures of ME! | stwhiteisme/Stwhiteisme | [
"region:us"
] | 2022-10-29T17:18:42+00:00 | {} | 2022-10-29T17:19:22+00:00 | [] | [] | TAGS
#region-us
| Pictures of ME! | [] | [
"TAGS\n#region-us \n"
] |
0c88dc959fd721314f8ad736a96057cf1665e852 | # AutoTrain Dataset for project: oaoqoqkaksk
## Dataset Description
This dataset has been automatically processed by AutoTrain for project oaoqoqkaksk.
### Languages
The BCP-47 code for the dataset's language is en2nl.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "\u00de\u00e6t Sunnanrastere onl\u00edcnescynn",
"source": "The Sun raster image format"
},
{
"target": "Lundon",
"source": "Gordon"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1528 |
| valid | 383 |
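
For illustration, the split sizes can be checked against the table above with the `datasets` library. This is a sketch only; it assumes the repository loads with the default builder and uses the split names shown here:

```python
from datasets import load_dataset

ds = load_dataset("Tritkoman/ENtoANGGNOME")
print({name: split.num_rows for name, split in ds.items()})
# expected per the table above: train -> 1528, valid -> 383

pair = ds["train"][0]
print(pair["source"], "->", pair["target"])
```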
| Tritkoman/ENtoANGGNOME | [
"task_categories:translation",
"language:en",
"language:nl",
"region:us"
] | 2022-10-29T17:30:11+00:00 | {"language": ["en", "nl"], "task_categories": ["translation"]} | 2022-10-29T17:45:18+00:00 | [] | [
"en",
"nl"
] | TAGS
#task_categories-translation #language-English #language-Dutch #region-us
| AutoTrain Dataset for project: oaoqoqkaksk
==========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project oaoqoqkaksk.
### Languages
The BCP-47 code for the dataset's language is en2nl.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en2nl.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-translation #language-English #language-Dutch #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en2nl.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
d00918e71905f1a4f4696d0e61a979cfe8ccee01 | Dfggggvvhg | Zxol/Dfv | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | 2022-10-29T18:46:09+00:00 | {"license": "bigscience-bloom-rail-1.0"} | 2022-10-29T18:46:54+00:00 | [] | [] | TAGS
#license-bigscience-bloom-rail-1.0 #region-us
| Dfggggvvhg | [] | [
"TAGS\n#license-bigscience-bloom-rail-1.0 #region-us \n"
] |
b4b871e5d5f20e77218d34aabfd7e09f782fedd0 |
# Dataset Description
## Structure
- Consists of 5 fields
- Each row corresponds to a policy: a sequence of actions, given an initial `<START>` state, and the corresponding rewards at each step.
## Fields
`steps`, `step_attn_masks`, `rewards`, `actions`, `dones`
## Field descriptions
- `steps` (List of lists of `Int`s) - tokenized step tokens of all the steps in the policy sequence (here we use the `roberta-base` tokenizer, as `roberta-base` would be used to encode each step of a recipe)
- `step_attn_masks` (List of lists of `Int`s) - Attention masks corresponding to `steps`
- `rewards` (List of `Float`s) - Sequence of rewards (normalized between 0 and 1) assigned per step.
- `actions` (List of lists of `Int`s) - Sequence of actions (one-hot encoded, as the action space is discrete). There are `33` different actions possible (we consider the maximum number of steps per recipe = `16`, so the action can vary from `-16` to `+16`; the class label is obtained by adding 16 to the actual action value, as shown in the encoding sketch after this list)
- `dones` (List of `Bool`) - Sequence of flags, conveying if the work is completed when that step is reached, or not.
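
To make the action encoding above concrete, here is a small illustrative sketch of the offset-to-label mapping; the helper names are ours and are not part of the dataset:

```python
import numpy as np

NUM_ACTIONS = 33  # offsets -16 .. +16 inclusive

def action_to_one_hot(action: int) -> np.ndarray:
    """Encode a signed step offset (-16..+16) as a one-hot class vector."""
    label = action + 16  # shift so labels fall in 0..32
    one_hot = np.zeros(NUM_ACTIONS, dtype=np.int64)
    one_hot[label] = 1
    return one_hot

def one_hot_to_action(one_hot: np.ndarray) -> int:
    """Recover the signed step offset from a one-hot class vector."""
    return int(one_hot.argmax()) - 16

assert one_hot_to_action(action_to_one_hot(-16)) == -16
assert one_hot_to_action(action_to_one_hot(5)) == 5
```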
## Dataset Size
- Number of rows = `2255673`
- Maximum number of steps per row = `16` | AnonymousSub/recipe_RL_data_roberta-base | [
"multilinguality:monolingual",
"language:en",
"region:us"
] | 2022-10-29T20:16:35+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "recipe RL roberta base", "tags": []} | 2022-11-03T15:38:06+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #region-us
|
# Dataset Description
## Structure
- Consists of 5 fields
- Each row corresponds to a policy: a sequence of actions, given an initial '<START>' state, and the corresponding rewards at each step.
## Fields
'steps', 'step_attn_masks', 'rewards', 'actions', 'dones'
## Field descriptions
- 'steps' (List of lists of 'Int's) - tokenized step tokens of all the steps in the policy sequence (here we use the 'roberta-base' tokenizer, as 'roberta-base' would be used to encode each step of a recipe)
- 'step_attn_masks' (List of lists of 'Int's) - Attention masks corresponding to 'steps'
- 'rewards' (List of 'Float's) - Sequence of rewards (normalized between 0 and 1) assigned per step.
- 'actions' (List of lists of 'Int's) - Sequence of actions (one-hot encoded, as the action space is discrete). There are '33' different actions possible (we consider the maximum number of steps per recipe = '16', so the action can vary from '-16' to '+16'; the class label is obtained by adding 16 to the actual action value)
- 'dones' (List of 'Bool') - Sequence of flags, conveying if the work is completed when that step is reached, or not.
## Dataset Size
- Number of rows = '2255673'
- Maximum number of steps per row = '16' | [
"# Dataset Description",
"## Structure\n\n- Consists of 5 fields\n- Each row corresponds to a policy - sequence of actions, given an initial '<START>' state, and corresponding rewards at each step.",
"## Fields\n\n'steps', 'step_attn_masks', 'rewards', 'actions', 'dones'",
"## Field descriptions\n\n- 'steps' (List of lists of 'Int's) - tokenized step tokens of all the steps in the policy sequence (here we use the 'roberta-base' tokenizer, as 'roberta-base' would be used to encode each step of a recipe)\n- 'step_attn_masks' (List of lists of 'Int's) - Attention masks corresponding to 'steps'\n- 'rewards' (List of 'Float's) - Sequence of rewards (normalized b/w 0 and 1) assigned per step.\n- 'actions' (List of lists of 'Int's) - Sequence of actions (one-hot encoded, as the action space is discrete). There are '33' different actions possible (we consider the maximum number of steps per recipe = '16', so the action can vary from '-16' to '+16'; The class label is got by adding 16 to the actual action value)\n- 'dones' (List of 'Bool') - Sequence of flags, conveying if the work is completed when that step is reached, or not.",
"## Dataset Size\n\n- Number of rows = '2255673'\n- Maximum number of steps per row = '16'"
] | [
"TAGS\n#multilinguality-monolingual #language-English #region-us \n",
"# Dataset Description",
"## Structure\n\n- Consists of 5 fields\n- Each row corresponds to a policy - sequence of actions, given an initial '<START>' state, and corresponding rewards at each step.",
"## Fields\n\n'steps', 'step_attn_masks', 'rewards', 'actions', 'dones'",
"## Field descriptions\n\n- 'steps' (List of lists of 'Int's) - tokenized step tokens of all the steps in the policy sequence (here we use the 'roberta-base' tokenizer, as 'roberta-base' would be used to encode each step of a recipe)\n- 'step_attn_masks' (List of lists of 'Int's) - Attention masks corresponding to 'steps'\n- 'rewards' (List of 'Float's) - Sequence of rewards (normalized b/w 0 and 1) assigned per step.\n- 'actions' (List of lists of 'Int's) - Sequence of actions (one-hot encoded, as the action space is discrete). There are '33' different actions possible (we consider the maximum number of steps per recipe = '16', so the action can vary from '-16' to '+16'; The class label is got by adding 16 to the actual action value)\n- 'dones' (List of 'Bool') - Sequence of flags, conveying if the work is completed when that step is reached, or not.",
"## Dataset Size\n\n- Number of rows = '2255673'\n- Maximum number of steps per row = '16'"
] |
d98f91761614aa984340c6ce99a333e4b2cd21b6 |
# Chibi Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by chibi_style"```
Use the (Chibi) tag beside the embedding for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/rXHJyFQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eocJJXg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8dA3EUO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/mmChRb3.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sooxpE5.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/chibi_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-29T20:44:17+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T20:50:26+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Chibi Style Embedding / Textual Inversion
=========================================
Usage
-----
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
Use the (Chibi) tag beside the embedding for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
008edafee29d0b086ea59c8b94a83fb12cb1aa00 |
# Dataset Card for S&P 500 Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
This dataset was created by combining the daily close prices for each stock in the current (as of 10/29/2022) S&P 500 index dating back to January 1, 1970. The data came from the Kaggle dataset (https://www.kaggle.com/datasets/paultimothymooney/stock-market-data) and was aggregated using pandas before being converted to a Hugging Face Dataset.
### Dataset Summary
This dataset has 407 columns specifying dates and the associated close prices of the stocks in the S&P 500 for which data could be accessed from the above Kaggle dataset. 94 stocks are missing due to issues loading their data into the dataset (e.g., stock name changes such as FB to META). These items will need further review. There are many NA values due to stocks that were not in existence as early as 1970.
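
A rough sketch of the aggregation step described above (the file layout, paths, and column names are assumptions for illustration and were not verified against the Kaggle dump):

```python
import glob
import pandas as pd
from datasets import Dataset

series = []
for path in glob.glob("stock_market_data/sp500/csv/*.csv"):  # assumed layout
    ticker = path.split("/")[-1].removesuffix(".csv")
    df = pd.read_csv(path, parse_dates=["Date"])
    # keep only the daily close price, renamed to the ticker symbol
    series.append(df.set_index("Date")["Close"].rename(ticker))

# Outer join on date: stocks that did not exist yet appear as NA.
combined = pd.concat(series, axis=1).sort_index().loc["1970-01-01":]
ds = Dataset.from_pandas(combined.reset_index())
```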
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
No split has currently been created for the dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
https://www.kaggle.com/datasets/paultimothymooney/stock-market-data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@nick-carroll1](https://github.com/nick-carroll1) for adding this dataset.
---
dataset_info:
features:
- name: MMM
dtype: float64
- name: AOS
dtype: float64
- name: ABT
dtype: float64
- name: ABBV
dtype: float64
- name: ABMD
dtype: float64
- name: ACN
dtype: float64
- name: ATVI
dtype: float64
- name: ADM
dtype: float64
- name: ADBE
dtype: float64
- name: ADP
dtype: float64
- name: AAP
dtype: float64
- name: A
dtype: float64
- name: APD
dtype: float64
- name: AKAM
dtype: float64
- name: ALK
dtype: float64
- name: ALB
dtype: float64
- name: ARE
dtype: float64
- name: ALGN
dtype: float64
- name: ALLE
dtype: float64
- name: LNT
dtype: float64
- name: GOOG
dtype: float64
- name: MO
dtype: float64
- name: AMZN
dtype: float64
- name: AMD
dtype: float64
- name: AEE
dtype: float64
- name: AAL
dtype: float64
- name: AEP
dtype: float64
- name: AXP
dtype: float64
- name: AIG
dtype: float64
- name: AMT
dtype: float64
- name: AWK
dtype: float64
- name: AMP
dtype: float64
- name: ABC
dtype: float64
- name: AME
dtype: float64
- name: AMGN
dtype: float64
- name: APH
dtype: float64
- name: ADI
dtype: float64
- name: AON
dtype: float64
- name: APA
dtype: float64
- name: AAPL
dtype: float64
- name: AMAT
dtype: float64
- name: ANET
dtype: float64
- name: AJG
dtype: float64
- name: AIZ
dtype: float64
- name: T
dtype: float64
- name: ATO
dtype: float64
- name: ADSK
dtype: float64
- name: AZO
dtype: float64
- name: AVB
dtype: float64
- name: AVY
dtype: float64
- name: BAC
dtype: float64
- name: BAX
dtype: float64
- name: BDX
dtype: float64
- name: WRB
dtype: float64
- name: BBY
dtype: float64
- name: BIO
dtype: float64
- name: BIIB
dtype: float64
- name: BLK
dtype: float64
- name: BK
dtype: float64
- name: BA
dtype: float64
- name: BWA
dtype: float64
- name: BXP
dtype: float64
- name: BSX
dtype: float64
- name: BMY
dtype: float64
- name: AVGO
dtype: float64
- name: BR
dtype: float64
- name: BRO
dtype: float64
- name: CHRW
dtype: float64
- name: CDNS
dtype: float64
- name: CZR
dtype: float64
- name: CPT
dtype: float64
- name: CPB
dtype: float64
- name: COF
dtype: float64
- name: CAH
dtype: float64
- name: KMX
dtype: float64
- name: CAT
dtype: float64
- name: CBOE
dtype: float64
- name: CDW
dtype: float64
- name: CNC
dtype: float64
- name: CNP
dtype: float64
- name: CF
dtype: float64
- name: CRL
dtype: float64
- name: SCHW
dtype: float64
- name: CHTR
dtype: float64
- name: CMG
dtype: float64
- name: CB
dtype: float64
- name: CHD
dtype: float64
- name: CINF
dtype: float64
- name: CTAS
dtype: float64
- name: CSCO
dtype: float64
- name: C
dtype: float64
- name: CFG
dtype: float64
- name: CLX
dtype: float64
- name: CME
dtype: float64
- name: CMS
dtype: float64
- name: KO
dtype: float64
- name: CTSH
dtype: float64
- name: CL
dtype: float64
- name: CMCSA
dtype: float64
- name: CAG
dtype: float64
- name: COP
dtype: float64
- name: ED
dtype: float64
- name: COO
dtype: float64
- name: CPRT
dtype: float64
- name: GLW
dtype: float64
- name: CSGP
dtype: float64
- name: COST
dtype: float64
- name: CCI
dtype: float64
- name: CMI
dtype: float64
- name: DHI
dtype: float64
- name: DRI
dtype: float64
- name: DVA
dtype: float64
- name: DE
dtype: float64
- name: DAL
dtype: float64
- name: DVN
dtype: float64
- name: DXCM
dtype: float64
- name: FANG
dtype: float64
- name: DLR
dtype: float64
- name: DFS
dtype: float64
- name: DISH
dtype: float64
- name: DIS
dtype: float64
- name: DG
dtype: float64
- name: DLTR
dtype: float64
- name: D
dtype: float64
- name: DPZ
dtype: float64
- name: DOV
dtype: float64
- name: DOW
dtype: float64
- name: DTE
dtype: float64
- name: DD
dtype: float64
- name: EMN
dtype: float64
- name: ETN
dtype: float64
- name: EBAY
dtype: float64
- name: ECL
dtype: float64
- name: EIX
dtype: float64
- name: EW
dtype: float64
- name: EA
dtype: float64
- name: LLY
dtype: float64
- name: EMR
dtype: float64
- name: ENPH
dtype: float64
- name: EOG
dtype: float64
- name: EPAM
dtype: float64
- name: EFX
dtype: float64
- name: EQIX
dtype: float64
- name: EQR
dtype: float64
- name: ESS
dtype: float64
- name: EL
dtype: float64
- name: RE
dtype: float64
- name: ES
dtype: float64
- name: EXC
dtype: float64
- name: EXPE
dtype: float64
- name: EXPD
dtype: float64
- name: EXR
dtype: float64
- name: XOM
dtype: float64
- name: FFIV
dtype: float64
- name: FDS
dtype: float64
- name: FAST
dtype: float64
- name: FRT
dtype: float64
- name: FDX
dtype: float64
- name: FITB
dtype: float64
- name: FRC
dtype: float64
- name: FE
dtype: float64
- name: FIS
dtype: float64
- name: FISV
dtype: float64
- name: FLT
dtype: float64
- name: FMC
dtype: float64
- name: F
dtype: float64
- name: FTNT
dtype: float64
- name: FBHS
dtype: float64
- name: FOXA
dtype: float64
- name: BEN
dtype: float64
- name: FCX
dtype: float64
- name: GRMN
dtype: float64
- name: IT
dtype: float64
- name: GNRC
dtype: float64
- name: GD
dtype: float64
- name: GE
dtype: float64
- name: GIS
dtype: float64
- name: GM
dtype: float64
- name: GPC
dtype: float64
- name: GILD
dtype: float64
- name: GPN
dtype: float64
- name: HAL
dtype: float64
- name: HIG
dtype: float64
- name: HAS
dtype: float64
- name: HCA
dtype: float64
- name: HSIC
dtype: float64
- name: HSY
dtype: float64
- name: HES
dtype: float64
- name: HPE
dtype: float64
- name: HLT
dtype: float64
- name: HOLX
dtype: float64
- name: HD
dtype: float64
- name: HON
dtype: float64
- name: HRL
dtype: float64
- name: HST
dtype: float64
- name: HPQ
dtype: float64
- name: HUM
dtype: float64
- name: HBAN
dtype: float64
- name: HII
dtype: float64
- name: IBM
dtype: float64
- name: IEX
dtype: float64
- name: IDXX
dtype: float64
- name: ITW
dtype: float64
- name: ILMN
dtype: float64
- name: INCY
dtype: float64
- name: IR
dtype: float64
- name: INTC
dtype: float64
- name: ICE
dtype: float64
- name: IP
dtype: float64
- name: IPG
dtype: float64
- name: IFF
dtype: float64
- name: INTU
dtype: float64
- name: ISRG
dtype: float64
- name: IVZ
dtype: float64
- name: IRM
dtype: float64
- name: JBHT
dtype: float64
- name: JKHY
dtype: float64
- name: JNJ
dtype: float64
- name: JCI
dtype: float64
- name: JPM
dtype: float64
- name: JNPR
dtype: float64
- name: K
dtype: float64
- name: KEY
dtype: float64
- name: KEYS
dtype: float64
- name: KMB
dtype: float64
- name: KIM
dtype: float64
- name: KLAC
dtype: float64
- name: KHC
dtype: float64
- name: KR
dtype: float64
- name: LH
dtype: float64
- name: LRCX
dtype: float64
- name: LVS
dtype: float64
- name: LDOS
dtype: float64
- name: LNC
dtype: float64
- name: LYV
dtype: float64
- name: LKQ
dtype: float64
- name: LMT
dtype: float64
- name: LOW
dtype: float64
- name: LYB
dtype: float64
- name: MRO
dtype: float64
- name: MPC
dtype: float64
- name: MKTX
dtype: float64
- name: MAR
dtype: float64
- name: MMC
dtype: float64
- name: MLM
dtype: float64
- name: MA
dtype: float64
- name: MKC
dtype: float64
- name: MCD
dtype: float64
- name: MCK
dtype: float64
- name: MDT
dtype: float64
- name: MRK
dtype: float64
- name: MET
dtype: float64
- name: MTD
dtype: float64
- name: MGM
dtype: float64
- name: MCHP
dtype: float64
- name: MU
dtype: float64
- name: MSFT
dtype: float64
- name: MAA
dtype: float64
- name: MHK
dtype: float64
- name: MOH
dtype: float64
- name: TAP
dtype: float64
- name: MDLZ
dtype: float64
- name: MPWR
dtype: float64
- name: MNST
dtype: float64
- name: MCO
dtype: float64
- name: MOS
dtype: float64
- name: MSI
dtype: float64
- name: MSCI
dtype: float64
- name: NDAQ
dtype: float64
- name: NTAP
dtype: float64
- name: NFLX
dtype: float64
- name: NWL
dtype: float64
- name: NEM
dtype: float64
- name: NWSA
dtype: float64
- name: NEE
dtype: float64
- name: NI
dtype: float64
- name: NDSN
dtype: float64
- name: NSC
dtype: float64
- name: NTRS
dtype: float64
- name: NOC
dtype: float64
- name: NCLH
dtype: float64
- name: NRG
dtype: float64
- name: NVDA
dtype: float64
- name: NVR
dtype: float64
- name: NXPI
dtype: float64
- name: ORLY
dtype: float64
- name: OXY
dtype: float64
- name: ODFL
dtype: float64
- name: OMC
dtype: float64
- name: OKE
dtype: float64
- name: PCAR
dtype: float64
- name: PKG
dtype: float64
- name: PH
dtype: float64
- name: PAYX
dtype: float64
- name: PAYC
dtype: float64
- name: PNR
dtype: float64
- name: PEP
dtype: float64
- name: PKI
dtype: float64
- name: PFE
dtype: float64
- name: PM
dtype: float64
- name: PSX
dtype: float64
- name: PNW
dtype: float64
- name: PXD
dtype: float64
- name: PNC
dtype: float64
- name: POOL
dtype: float64
- name: PPG
dtype: float64
- name: PFG
dtype: float64
- name: PG
dtype: float64
- name: PLD
dtype: float64
- name: PRU
dtype: float64
- name: PEG
dtype: float64
- name: PTC
dtype: float64
- name: PHM
dtype: float64
- name: QRVO
dtype: float64
- name: PWR
dtype: float64
- name: QCOM
dtype: float64
- name: DGX
dtype: float64
- name: RL
dtype: float64
- name: RJF
dtype: float64
- name: O
dtype: float64
- name: REG
dtype: float64
- name: REGN
dtype: float64
- name: RF
dtype: float64
- name: RSG
dtype: float64
- name: RMD
dtype: float64
- name: RHI
dtype: float64
- name: ROK
dtype: float64
- name: ROL
dtype: float64
- name: ROP
dtype: float64
- name: ROST
dtype: float64
- name: RCL
dtype: float64
- name: CRM
dtype: float64
- name: SBAC
dtype: float64
- name: SLB
dtype: float64
- name: STX
dtype: float64
- name: SEE
dtype: float64
- name: SRE
dtype: float64
- name: NOW
dtype: float64
- name: SHW
dtype: float64
- name: SBNY
dtype: float64
- name: SPG
dtype: float64
- name: SWKS
dtype: float64
- name: SO
dtype: float64
- name: LUV
dtype: float64
- name: SWK
dtype: float64
- name: SBUX
dtype: float64
- name: STT
dtype: float64
- name: SYK
dtype: float64
- name: SIVB
dtype: float64
- name: SYF
dtype: float64
- name: SNPS
dtype: float64
- name: TMUS
dtype: float64
- name: TROW
dtype: float64
- name: TTWO
dtype: float64
- name: TRGP
dtype: float64
- name: TEL
dtype: float64
- name: TDY
dtype: float64
- name: TSLA
dtype: float64
- name: TXN
dtype: float64
- name: TXT
dtype: float64
- name: TMO
dtype: float64
- name: TJX
dtype: float64
- name: TSCO
dtype: float64
- name: TDG
dtype: float64
- name: TRV
dtype: float64
- name: TYL
dtype: float64
- name: TSN
dtype: float64
- name: USB
dtype: float64
- name: UDR
dtype: float64
- name: ULTA
dtype: float64
- name: UNP
dtype: float64
- name: UAL
dtype: float64
- name: UPS
dtype: float64
- name: URI
dtype: float64
- name: UNH
dtype: float64
- name: UHS
dtype: float64
- name: VTR
dtype: float64
- name: VRSN
dtype: float64
- name: VRSK
dtype: float64
- name: VZ
dtype: float64
- name: VRTX
dtype: float64
- name: VFC
dtype: float64
- name: V
dtype: float64
- name: VMC
dtype: float64
- name: WAB
dtype: float64
- name: WBA
dtype: float64
- name: WMT
dtype: float64
- name: WM
dtype: float64
- name: WAT
dtype: float64
- name: WEC
dtype: float64
- name: WFC
dtype: float64
- name: WST
dtype: float64
- name: WDC
dtype: float64
- name: WRK
dtype: float64
- name: WY
dtype: float64
- name: WHR
dtype: float64
- name: WMB
dtype: float64
- name: WTW
dtype: float64
- name: GWW
dtype: float64
- name: WYNN
dtype: float64
- name: XEL
dtype: float64
- name: XYL
dtype: float64
- name: YUM
dtype: float64
- name: ZBRA
dtype: float64
- name: ZBH
dtype: float64
- name: ZION
dtype: float64
- name: ZTS
dtype: float64
- name: Date
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 44121086
num_examples: 13322
download_size: 0
dataset_size: 44121086
---
# Dataset Card for "sp500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

| nick-carroll1/sp500 | [
"region:us"
] | 2022-10-29T22:20:49+00:00 | {} | 2022-10-29T23:08:46+00:00 | [] | [] |
d77d7ad3c624c51030f2f32c83e892b3d620b3d4 |
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://aclanthology.org/2022.emnlp-main.267/)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
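
Concretely, each task can be read off a single row's fields (documented under Data Attributes below). The helper below is purely illustrative: the function name is hypothetical and it is not part of any official preprocessing script.

```python
# Illustrative only: the helper is hypothetical; field names match the
# Data Attributes table documented below.
def to_task_examples(row):
    # 1) Dialogue response generation: unsafe context -> prosocial response
    generation = {"input": row["context"], "target": row["response"]}
    # 2) Dialogue safety prediction: context -> five-way safety verdict
    safety = {"input": row["context"], "label": row["safety_label"]}
    # 3) Rules-of-thumb generation: context -> relevant RoTs
    #    (rots is null for __casual__ contexts)
    rot_generation = {"input": row["context"], "target": row["rots"]}
    return generation, safety, rot_generation
```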
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded on rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for a `context` *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
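
Once loaded, the attributes above can be inspected directly. The snippet below is a minimal sketch: it assumes the standard `datasets` library, the Hub repo id of this card, a conventional `train` split name, and that turns appear in order within each dialogue.

```python
from collections import Counter

from datasets import load_dataset

# Assumed repo id and split name; adjust if the Hub layout differs.
dialogs = load_dataset("allenai/prosocial-dialog", split="train")

first = dialogs[0]
print(first["context"])   # potentially unsafe utterance
print(first["response"])  # guiding, RoT-grounded response
print(first["rots"])      # rules-of-thumb (null for __casual__ turns)

# Distribution of the five-way safety verdict described above.
print(Counter(dialogs["safety_label"]))

# Rows in a dialogue share dialogue_id and are ordered by response_id;
# episode_done marks the final turn, so episodes can be regrouped:
episodes, current = [], []
for row in dialogs:
    current.append((row["context"], row["response"]))
    if row["episode_done"]:
        episodes.append(current)
        current = []
print(len(episodes), "dialogues reconstructed")
```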
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multi-turn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
```

| allenai/prosocial-dialog | [
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"source_datasets:extended|social_bias_frames",
"language:en",
"license:cc-by-4.0",
"dialogue",
"dialogue safety",
"social norm",
"rules-of-thumb",
"arxiv:2205.12688",
"region:us"
] | 2022-10-30T04:24:12+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100K<n<1M"], "source_datasets": ["original", "extended|social_bias_frames"], "task_categories": ["conversational", "text-classification"], "task_ids": ["dialogue-generation", "multi-class-classification"], "pretty_name": "ProsocialDialog", "tags": ["dialogue", "dialogue safety", "social norm", "rules-of-thumb"]} | 2023-02-03T07:58:29+00:00 | [
"2205.12688"
] | [
"en"
] |
#task_categories-conversational #task_categories-text-classification #task_ids-dialogue-generation #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-100K<n<1M #source_datasets-original #source_datasets-extended|social_bias_frames #language-English #license-cc-by-4.0 #dialogue #dialogue safety #social norm #rules-of-thumb #arxiv-2205.12688 #region-us
| Dataset Card for ProsocialDialog Dataset
========================================
Dataset Description
-------------------
* Repository: Dataset and Model
* Paper: ProsocialDialog: A Prosocial Backbone for Conversational Agents
* Point of Contact: Hyunwoo Kim
Dataset Summary
---------------
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
Supported Tasks
---------------
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
Languages
---------
English
Dataset Structure
-----------------
### Data Attributes
attribute: 'context', type: str, description: the potentially unsafe utterance
attribute: 'response', type: str, description: the guiding utterance grounded on rules-of-thumb ('rots')
attribute: 'rots', type: list of str|null, description: the relevant rules-of-thumb for 'text' *not* labeled as \_\_casual\_\_
attribute: 'safety\_label', type: str, description: the final verdict of the context according to 'safety\_annotations': {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
attribute: 'safety\_annotations', type: list of str, description: raw annotations from three workers: {casual, needs caution, needs intervention}
attribute: 'safety\_annotation\_reasons', type: list of str, description: the reasons behind the safety annotations in free-form text from each worker
attribute: 'source', type: str, description: the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics\_amt, ethics\_reddit}
attribute: 'etc', type: str|null, description: other information
attribute: 'dialogue\_id', type: int, description: the dialogue index
attribute: 'response\_id', type: int, description: the response index
attribute: 'episode\_done', type: bool, description: an indicator of whether it is the end of the dialogue
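A minimal loading sketch for the schema above; the dataset id `allenai/prosocial-dialog` is an assumption here, so substitute the actual repo id if the data is hosted elsewhere:

```python
from datasets import load_dataset

ds = load_dataset("allenai/prosocial-dialog", split="train")  # id is an assumption

ex = ds[0]
print(ex["context"])       # the potentially unsafe utterance
print(ex["response"])      # the prosocial guiding response
print(ex["rots"])          # rules-of-thumb; empty/None for __casual__ contexts
print(ex["safety_label"])  # e.g. "__needs_caution__"

# Turns of one dialogue share a dialogue_id; response_id orders them.
dialogue = [r for r in ds if r["dialogue_id"] == ex["dialogue_id"]]
dialogue.sort(key=lambda r: r["response_id"])
```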
Dataset Creation
----------------
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our paper.
Additional Information
----------------------
Please cite our work if you found the resources in this repository useful:
| [
"### Data Attributes\n\n\nattribute: 'context', type: str, description: the potentially unsafe utterance\nattribute: 'response', type: str, description: the guiding utterance grounded on rules-of-thumb ('rots')\nattribute: 'rots', type: list of str|null, description: the relevant rules-of-thumb for 'text' *not* labeled as \\_\\_casual\\_\\_\nattribute: 'safety\\_label', type: str, description: the final verdict of the context according to 'safety\\_annotations': {\\_\\_casual\\_\\_, \\_\\_possibly\\_needs\\_caution\\_\\_, \\_\\_probably\\_needs\\_caution\\_\\_, \\_\\_needs\\_caution\\_\\_, \\_\\_needs\\_intervention\\_\\_}\nattribute: 'safety\\_annotations', type: list of str, description: raw annotations from three workers: {casual, needs caution, needs intervention}\nattribute: 'safety\\_annotation\\_reasons', type: list of str, description: the reasons behind the safety annotations in free-form text from each worker\nattribute: 'source', type: str, description: the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics\\_amt, ethics\\_reddit}\nattribute: 'etc', type: str|null, description: other information\nattribute: 'dialogue\\_id', type: int, description: the dialogue index\nattribute: 'response\\_id', type: int, description: the response index\nattribute: 'episode\\_done', type: bool, description: an indicator of whether it is the end of the dialogue\n\n\nDataset Creation\n----------------\n\n\nTo create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).",
"### Further Details, Social Impacts, and Limitations\n\n\nPlease refer to our paper.\n\n\nAdditional Information\n----------------------\n\n\nPlease cite our work if you found the resources in this repository useful:"
] | [
"TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-dialogue-generation #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-100K<n<1M #source_datasets-original #source_datasets-extended|social_bias_frames #language-English #license-cc-by-4.0 #dialogue #dialogue safety #social norm #rules-of-thumb #arxiv-2205.12688 #region-us \n",
"### Data Attributes\n\n\nattribute: 'context', type: str, description: the potentially unsafe utterance\nattribute: 'response', type: str, description: the guiding utterance grounded on rules-of-thumb ('rots')\nattribute: 'rots', type: list of str|null, description: the relevant rules-of-thumb for 'text' *not* labeled as \\_\\_casual\\_\\_\nattribute: 'safety\\_label', type: str, description: the final verdict of the context according to 'safety\\_annotations': {\\_\\_casual\\_\\_, \\_\\_possibly\\_needs\\_caution\\_\\_, \\_\\_probably\\_needs\\_caution\\_\\_, \\_\\_needs\\_caution\\_\\_, \\_\\_needs\\_intervention\\_\\_}\nattribute: 'safety\\_annotations', type: list of str, description: raw annotations from three workers: {casual, needs caution, needs intervention}\nattribute: 'safety\\_annotation\\_reasons', type: list of str, description: the reasons behind the safety annotations in free-form text from each worker\nattribute: 'source', type: str, description: the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics\\_amt, ethics\\_reddit}\nattribute: 'etc', type: str|null, description: other information\nattribute: 'dialogue\\_id', type: int, description: the dialogue index\nattribute: 'response\\_id', type: int, description: the response index\nattribute: 'episode\\_done', type: bool, description: an indicator of whether it is the end of the dialogue\n\n\nDataset Creation\n----------------\n\n\nTo create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).",
"### Further Details, Social Impacts, and Limitations\n\n\nPlease refer to our paper.\n\n\nAdditional Information\n----------------------\n\n\nPlease cite our work if you found the resources in this repository useful:"
] |
3e9c4eb6eb75d1a72396ab005bcd0abdcf319060 | # Dataset Card for "sroie_document_understanding"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
This dataset is an enriched version of SROIE 2019 dataset with additional labels for line descriptions and line totals for OCR and layout understanding.
## Dataset Structure
```python
DatasetDict({
train: Dataset({
features: ['image', 'ocr'],
num_rows: 652
})
})
```
### Data Fields
```python
{
'image': PIL Image object,
'ocr': [
# text box 1
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str, # "other" | "company" | "address" | "date" | "line_description" | "line_total" | "total"
'text': str
},
...
# text box N
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str,
'text': str,
}
]
}
```
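A minimal sketch of reading one annotated receipt and pulling out its line items, using the field names from the schema above (the split name follows the structure shown earlier):

```python
from datasets import load_dataset

ds = load_dataset("arvindrajan92/sroie_document_understanding", split="train")

sample = ds[0]
image = sample["image"]  # a PIL.Image object

# Collect the receipt's line items: description/total text plus box corners.
line_items = [
    (ocr_box["text"], ocr_box["label"], ocr_box["box"])
    for ocr_box in sample["ocr"]
    if ocr_box["label"] in ("line_description", "line_total")
]
for text, label, quad in line_items:
    # quad is four (x, y) corner points of the text box
    print(label, text, quad)
```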
## Dataset Creation
### Source Data
The dataset was obtained from [ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction](https://rrc.cvc.uab.es/?ch=13)
### Annotations
#### Annotation process
Additional labels to receipt line items were added using open source [labelme](https://github.com/wkentaro/labelme) tool.
#### Who are the annotators?
Arvind Rajan (adding labels to the original text boxes from source)
## Additional Information
### Licensing Information
MIT License
### Contributions
Thanks to [@arvindrajan92](https://github.com/arvindrajan92) for adding this dataset. | arvindrajan92/sroie_document_understanding | [
"license:mit",
"region:us"
] | 2022-10-30T04:49:57+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ocr", "list": [{"name": "box", "sequence": {"sequence": "float64"}}, {"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 267317016.0, "num_examples": 652}], "download_size": 217146103, "dataset_size": 267317016.0}} | 2022-10-30T06:30:53+00:00 | [] | [] | TAGS
#license-mit #region-us
| # Dataset Card for "sroie_document_understanding"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Data Fields
- Dataset Creation
- Source Data
- Annotations
- Additional Information
- Licensing Information
- Contributions
## Dataset Description
This dataset is an enriched version of SROIE 2019 dataset with additional labels for line descriptions and line totals for OCR and layout understanding.
## Dataset Structure
### Data Fields
## Dataset Creation
### Source Data
The dataset was obtained from ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction
### Annotations
#### Annotation process
Additional labels to receipt line items were added using open source labelme tool.
#### Who are the annotators?
Arvind Rajan (adding labels to the original text boxes from source)
## Additional Information
### Licensing Information
MIT License
### Contributions
Thanks to @arvindrajan92 for adding this dataset. | [
"# Dataset Card for \"sroie_document_understanding\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n - Annotations\n- Additional Information\n - Licensing Information\n - Contributions",
"## Dataset Description\n\nThis dataset is an enriched version of SROIE 2019 dataset with additional labels for line descriptions and line totals for OCR and layout understanding.",
"## Dataset Structure",
"### Data Fields",
"## Dataset Creation",
"### Source Data\n\nThe dataset was obtained from ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction",
"### Annotations",
"#### Annotation process\n\nAdditional labels to receipt line items were added using open source labelme tool.",
"#### Who are the annotators?\n\nArvind Rajan (adding labels to the original text boxes from source)",
"## Additional Information",
"### Licensing Information\n\nMIT License",
"### Contributions\n\nThanks to @arvindrajan92 for adding this dataset."
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for \"sroie_document_understanding\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Source Data\n - Annotations\n- Additional Information\n - Licensing Information\n - Contributions",
"## Dataset Description\n\nThis dataset is an enriched version of SROIE 2019 dataset with additional labels for line descriptions and line totals for OCR and layout understanding.",
"## Dataset Structure",
"### Data Fields",
"## Dataset Creation",
"### Source Data\n\nThe dataset was obtained from ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction",
"### Annotations",
"#### Annotation process\n\nAdditional labels to receipt line items were added using open source labelme tool.",
"#### Who are the annotators?\n\nArvind Rajan (adding labels to the original text boxes from source)",
"## Additional Information",
"### Licensing Information\n\nMIT License",
"### Contributions\n\nThanks to @arvindrajan92 for adding this dataset."
] |
06581f273fd26b82fb36eecb48ddda298564f29f |
Free Fonts for Simplified Chinese, downloaded from [Google Fonts](https://fonts.google.com/?subset=chinese-simplified). | breezedeus/openfonts | [
"license:ofl-1.1",
"region:us"
] | 2022-10-30T06:29:57+00:00 | {"license": "ofl-1.1"} | 2022-10-30T06:37:11+00:00 | [] | [] | TAGS
#license-ofl-1.1 #region-us
|
Free Fonts for Simplified Chinese, downloaded from Google Fonts. | [] | [
"TAGS\n#license-ofl-1.1 #region-us \n"
] |
1a267499f05a2ada702cca61e9caf6ce4ed0cd6d | environmental news | api19750904/efeverde | [
"region:us"
] | 2022-10-30T09:29:19+00:00 | {} | 2022-10-30T09:30:29+00:00 | [] | [] | TAGS
#region-us
| environmental news | [] | [
"TAGS\n#region-us \n"
] |
a8f3bebe787e1b70a2bc5d3f6025b414a2eb4467 |
# Wlop Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by wlop_style"```
Use the embedding with one of [SirVeggie's](https://huggingface.co/SirVeggie) Wlop models for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ImByEK5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/BndPSqd.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/4cB2B28.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Hw5FMID.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ddwJwoO.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/wlop_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-30T09:36:54+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-03T23:34:09+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Wlop Style Embedding / Textual Inversion
========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
Use the embedding with one of SirVeggie's Wlop models for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
3971d9415584a57e6564fcc83310433c52a7bb82 |
# Torino Artist Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by torino_art"```
If it is too strong, just add [] around it.
Trained until 12800 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/xnRZgRb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/AcHsCMX.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/egIlKhy.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/nZQh3da.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/V9UFqn2.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/torino_art | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-30T09:47:07+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-30T09:53:46+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Torino Artist Embedding / Textual Inversion
===========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 12800 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
6892e2e8f10b7b385041ec817f024c8dfa4cbad2 | # Dataset Card for "answerable_tydiqa_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_raw | [
"region:us"
] | 2022-10-30T10:18:47+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21022889, "num_examples": 29868}, {"name": "validation", "num_bytes": 2616173, "num_examples": 3712}], "download_size": 16292808, "dataset_size": 23639062}} | 2022-10-30T10:19:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_raw"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_raw\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_raw\"\n\nMore Information needed"
] |
5dc06479106fbe781b1d1bb3c5da16ae4f3fdde0 | # Dataset Card for "answerable_tydiqa_raw_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_raw_split | [
"region:us"
] | 2022-10-30T10:19:23+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 32809511, "num_examples": 129290}, {"name": "validation", "num_bytes": 4034498, "num_examples": 15801}], "download_size": 17092210, "dataset_size": 36844009}} | 2022-10-30T10:19:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_raw_split"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_raw_split\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_raw_split\"\n\nMore Information needed"
] |
9e80f0e386c0c307eea98787ffa2dc558105cbfb | # Dataset Card for "answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6 | [
"region:us"
] | 2022-10-30T10:23:50+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21022889, "num_examples": 29868}, {"name": "validation", "num_bytes": 2616173, "num_examples": 3712}], "download_size": 16292808, "dataset_size": 23639062}} | 2022-10-30T10:24:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6\"\n\nMore Information needed"
] |
629ddc29395be3b5f982d8daf6d12731d7364931 | # Dataset Card for "answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PartiallyTyped/answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4 | [
"region:us"
] | 2022-10-30T10:26:11+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "seq_id", "dtype": "string"}, {"name": "golds", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 32809511, "num_examples": 129290}, {"name": "validation", "num_bytes": 4034498, "num_examples": 15801}], "download_size": 17092210, "dataset_size": 36844009}} | 2022-10-30T10:26:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4"
More Information needed | [
"# Dataset Card for \"answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4\"\n\nMore Information needed"
] |
c663a7a901ed9bfe086d513ce9de7aa2dbea5680 |
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by assassin_style </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by assassin_style-6500</em></li>
<li>10,000 steps <em>Usage: art by assassin_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by assassin_style </em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/RhE7Qce.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/wVOH8GU.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/YaBbNNK.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/63HpAf1.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/nrkCPEf.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/nrkCPEf.jpg"></a>
| zZWipeoutZz/assassin_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-30T11:37:45+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-30T13:00:51+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*art by assassin\_style*
add **[ ]** around it to reduce its weight.
#### Included Files
* 6500 steps *Usage: art by assassin\_style-6500*
* 10,000 steps *Usage: art by assassin\_style-10000*
* 15,000 steps *Usage: art by assassin\_style*
cheers
Wipeout
#### Example Pictures
#### prompt comparison
*click the image to enlarge*
[<img height="50%" width="50%" src="https://i.URL](https://i.URL target=) | [
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by assassin\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by assassin\\_style-6500*\n* 10,000 steps *Usage: art by assassin\\_style-10000*\n* 15,000 steps *Usage: art by assassin\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by assassin\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by assassin\\_style-6500*\n* 10,000 steps *Usage: art by assassin\\_style-10000*\n* 15,000 steps *Usage: art by assassin\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] |
f62c99bb7b1c00254d300679172802b400281cfe |
MovieLens 20M data with training and test sets split by userId for GAUC.
More details can be found at:
https://github.com/auxten/edgeRec/blob/main/example/movielens/readme.md
## User split
user split status in `user` table, see SQL below:
```sql
create table movies
(
movieId INTEGER,
title TEXT,
genres TEXT
);
create table ratings
(
userId INTEGER,
movieId INTEGER,
rating FLOAT,
timestamp INTEGER
);
create table tags
(
userId INTEGER,
movieId INTEGER,
tag TEXT,
timestamp INTEGER
);
-- import data from csv, do it with any tool
select count(distinct userId) from ratings; -- 138,493 users
create table user as select distinct userId, 0 as is_train from ratings;
-- choose 100000 random user as train user
update user
set is_train = 1
where userId in
(SELECT userId
FROM (select distinct userId from ratings)
ORDER BY RANDOM()
LIMIT 100000);
select count(*) from user where is_train != 1; -- 38,493 test users
-- split train and test set of movielens-20m ratings
create table ratings_train as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 1;
create table ratings_test as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 0;
select count(*) from ratings_train; --14,393,526
select count(*) from ratings_test; --5,606,737
select count(*) from ratings; --20,000,263
```
## User feature
`user_feature_train` and `user_feature_test` are pre-processed user features,
see SQL below:
```sql
-- user feature prepare
create table user_feature_train as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_train r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_train r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId
-- user feature prepare
create table user_feature_test as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_test r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_test r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId
```
## User behavior
```sql
create table ub_train as
select userId, group_concat(movieId) movieIds ,group_concat(timestamp) timestamps from ratings_train_desc group by userId order by timestamp
create table ub_test as
select userId, group_concat(movieId) movieIds ,group_concat(timestamp) timestamps from ratings_test_desc group by userId order by timestamp
create table ratings_train_desc as
select r.userId, movieId, rating, timestamp
from ratings_train r order by r.userId, timestamp desc;
create table ratings_test_desc as
select r.userId, movieId, rating, timestamp
from ratings_test r order by r.userId, timestamp desc;
```
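Since the split exists to support GAUC (per-user AUC, averaged with per-user weights), here is a minimal evaluation sketch over `ratings_test`; the database path, the rating > 3.5 binarization, and the placeholder scores are all assumptions, not part of the data:

```python
import sqlite3

import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

conn = sqlite3.connect("movielens.db")  # hypothetical path to the database built above
df = pd.read_sql("select userId, rating from ratings_test", conn)

df["label"] = (df["rating"] > 3.5).astype(int)  # assumed binarization threshold
rng = np.random.default_rng(0)
df["score"] = rng.random(len(df))  # placeholder: replace with your model's predictions

num, den = 0.0, 0
for _, g in df.groupby("userId"):
    if g["label"].nunique() < 2:
        continue  # AUC is undefined for users with a single class; skip them
    num += roc_auc_score(g["label"], g["score"]) * len(g)
    den += len(g)
print(f"GAUC: {num / den:.4f}")
```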
| auxten/movielens-20m | [
"license:apache-2.0",
"region:us"
] | 2022-10-30T13:47:43+00:00 | {"license": "apache-2.0"} | 2022-10-30T13:57:36+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
MovieLens 20M data with training and test sets split by userId for GAUC.
More details can be found at:
URL
## User split
user split status in 'user' table, see SQL below:
## User feature
'user_feature_train' and 'user_feature_test' are pre-processed user features,
see SQL below:
## User behavior
| [
"## User split\nuser split status in 'user' table, see SQL below:",
"## User feature\n'user_feature_train' and 'user_feature_test' are pre-processed user feature\nsee SQL below:",
"## User behavior"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"## User split\nuser split status in 'user' table, see SQL below:",
"## User feature\n'user_feature_train' and 'user_feature_test' are pre-processed user feature\nsee SQL below:",
"## User behavior"
] |
b5c56fd50f5993b1cebb86586d286981ec05ae72 |
# Dataset Card for "lmqg/qg_annotation"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the set of annotated questions generated by different models, used to measure the correlation of automatic metrics against
human judgments in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```python
{
"correctness": 1.8,
"grammaticality": 3.0,
"understandability": 2.4,
"prediction": "What trade did the Ming dynasty have a shortage of?",
"Bleu_4": 0.4961682999359617,
"METEOR": 0.3572683356086923,
"ROUGE_L": 0.7272727272727273,
"BERTScore": 0.9142221808433532,
"MoverScore": 0.6782580808848975,
"reference_raw": "What important trade did the Ming Dynasty have with Tibet?",
"answer_raw": "horse trade",
"paragraph_raw": "Some scholars note that Tibetan leaders during the Ming frequently engaged in civil war and conducted their own foreign diplomacy with neighboring states such as Nepal. Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet. Others argue that the significant religious nature of the relationship of the Ming court with Tibetan lamas is underrepresented in modern scholarship. In hopes of reviving the unique relationship of the earlier Mongol leader Kublai Khan (r. 1260\u20131294) and his spiritual superior Drog\u00f6n Ch\u00f6gyal Phagpa (1235\u20131280) of the Sakya school of Tibetan Buddhism, the Yongle Emperor (r. 1402\u20131424) made a concerted effort to build a secular and religious alliance with Deshin Shekpa (1384\u20131415), the Karmapa of the Karma Kagyu school. However, the Yongle Emperor's attempts were unsuccessful.",
"sentence_raw": "Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet.",
"reference_norm": "what important trade did the ming dynasty have with tibet ?",
"model": "T5 Large"
}
```
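As a minimal sketch of the intended use, the correlation of each automatic metric against the human `correctness` scores can be computed directly from these fields (Spearman rank correlation is one reasonable choice here; the paper's exact protocol may differ):

```python
from datasets import load_dataset
from scipy.stats import spearmanr

ds = load_dataset("lmqg/qg_annotation", split="train")

human = ds["correctness"]
for metric in ["Bleu_4", "METEOR", "ROUGE_L", "BERTScore", "MoverScore"]:
    rho, p = spearmanr(ds[metric], human)
    print(f"{metric}: Spearman rho = {rho:.3f} (p = {p:.3g})")
```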
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qg_annotation | [
"multilinguality:monolingual",
"size_categories:<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2022-10-30T14:26:50+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "<1K", "pretty_name": "QG Annotation"} | 2022-10-30T15:08:30+00:00 | [
"2210.03992"
] | [
"en"
] | TAGS
#multilinguality-monolingual #size_categories-<1K #language-English #license-cc-by-4.0 #arxiv-2210.03992 #region-us
|
# Dataset Card for "lmqg/qg_annotation"
## Dataset Description
- Repository: URL
- Paper: URL
- Point of Contact: Asahi Ushio
### Dataset Summary
This is the set of annotated questions generated by different models, used to measure the correlation of automatic metrics against
human judgments in "Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference".
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
| [
"# Dataset Card for \"lmqg/qg_annotation\"",
"## Dataset Description\n- Repository: URL\n- Paper: URL\n- Point of Contact: Asahi Ushio",
"### Dataset Summary\nThis is the annotated questions generated by different models, used to measure the correlation of automatic metrics against \nhuman in \"Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference\".",
"### Languages\nEnglish (en)",
"## Dataset Structure\nAn example of 'train' looks as follows."
] | [
"TAGS\n#multilinguality-monolingual #size_categories-<1K #language-English #license-cc-by-4.0 #arxiv-2210.03992 #region-us \n",
"# Dataset Card for \"lmqg/qg_annotation\"",
"## Dataset Description\n- Repository: URL\n- Paper: URL\n- Point of Contact: Asahi Ushio",
"### Dataset Summary\nThis is the annotated questions generated by different models, used to measure the correlation of automatic metrics against \nhuman in \"Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference\".",
"### Languages\nEnglish (en)",
"## Dataset Structure\nAn example of 'train' looks as follows."
] |
66bebf8a6d23d46f11d9528c9b9c01cad0a78d2d | efeverde | api19750904/efeverde_5_cat_lem | [
"region:us"
] | 2022-10-30T17:42:40+00:00 | {} | 2022-10-30T17:43:32+00:00 | [] | [] | TAGS
#region-us
| efeverde | [] | [
"TAGS\n#region-us \n"
] |
339ce0d6a41439bac7b42fd71405e68253ed1dbf |
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Tutorial](#tutorial)
- [Working with Taxonomies](#working-with-taxonomies)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set marks the first time that disparate camera trap data sets have been aggregated into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists only of camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those who want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [email protected].
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]) and [Zhi Zhang]([email protected]).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [[email protected]]([email protected]).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton]([email protected]).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will]([email protected]) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub]([email protected]) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {doi:10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12,132 sequences of camera trap images, totaling 30,227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa, is crucial habitat for many birds on a global scale, with more than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71,688 sequences of camera trap images, totaling 73,034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4,747 sequences of camera trap images, totaling 10,072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](mailto:[email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Laos, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available on the LILA website. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact [email protected].
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km²) and Las Unamas (40 km²), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez](mailto:[email protected]).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
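For orientation, a COCO Camera Traps file is a single JSON dictionary whose main keys are `images`, `annotations`, and `categories`. The sketch below illustrates the general shape only; the file names, IDs, and category values are hypothetical, and real files may carry further fields (e.g. datetimes or sequence frame numbers):

```python
# Illustrative sketch of the COCO Camera Traps structure (hypothetical values).
coco_camera_traps = {
    "images": [
        {
            "id": "seq0001_img001",          # unique image identifier
            "file_name": "seq0001_img001.jpg",
            "width": 1920,
            "height": 1080,
            "seq_id": "seq0001",             # sequence (burst) the image belongs to
            "location": "site_042",          # camera location identifier
        }
    ],
    "annotations": [
        {
            "id": "ann0001",
            "image_id": "seq0001_img001",    # links the label to an image
            "category_id": 3,
            # [x, y, width, height] in absolute pixels; only present in
            # datasets that include bounding boxes
            "bbox": [451.0, 212.0, 380.0, 290.0],
        }
    ],
    "categories": [{"id": 3, "name": "coyote"}],
}
```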
### Data Fields
Different datasets may have slightly varying fields, which include:
- `file_name`: the file name
- `width` and `height`: the dimensions of the image
- `study`: the research study the image was collected as part of
- `location`: the name of the location at which the image was taken
- `annotations`: the annotation information, including taxonomy labels, bounding box(es) (`bbox`/`bboxes`) if any, and any other available annotation data
- `image`: the `path` from which to download the image, plus any other available information, e.g. its size in `bytes`
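As a quick, minimal sketch of what these fields look like in practice, the snippet below loads one configuration and prints a single record (the exact fields vary by dataset, as noted above):

```python
from datasets import load_dataset

# Any LILA configuration name works here; "Caltech Camera Traps" is used as an
# example, matching the taxonomy examples further below.
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")

entry = dataset[0]
print(entry["file_name"], entry["width"], entry["height"])  # image metadata
print(entry["location"])                                    # camera location
print(entry["annotations"])                                 # taxonomy labels and any bounding boxes
print(entry["image"])                                       # download path (and size in bytes, if available)
```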
### Data Splits
This dataset does not have a predefined train/test split.
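If your experiments require a held-out set, you can derive one yourself. The sketch below shows a simple seeded random split with the `datasets` API; for camera trap data it is often preferable to instead hold out entire camera locations, so that near-identical frames from one camera cannot leak across splits. The location IDs in the sketch are hypothetical.

```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")

# Simple random 80/20 split, seeded for reproducibility
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]

# Leakage-safer alternative: hold out entire camera locations (hypothetical IDs)
held_out_locations = {"33", "46"}
test_by_location = dataset.filter(lambda x: x["location"] in held_out_locations)
train_by_location = dataset.filter(lambda x: x["location"] not in held_out_locations)
```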
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](mailto:[email protected]), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Tutorial
The [tutorial in this Google Colab notebook](https://colab.research.google.com/drive/17gPOIK-ksxPyX6yP9TaKIimlwf9DYe2R?usp=sharing) demonstrates how to work with this dataset, including filtering by species, collating configurations, and downloading images.
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all the entries with "felis catus" as the species for the first annotation.
```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
# Filters to show only cats
cats = dataset.filter(lambda x: x["annotations"]["taxonomy"][0]["species"] == taxonomy["species"].str2int("felis catus"))
```
The original common names have been saved with their taxonomy mappings in this repository in `common_names_to_tax.json`. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values, which you will need to disambiguate.
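One quick way to see which common names are ambiguous is to look for duplicated index entries; the following is a small pandas sketch:

```python
import pandas as pd

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json(
    "https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json",
    lines=True,
).set_index("common_name")

# Common names that map to more than one taxonomy entry
duplicates = LILA_COMMON_NAMES_TO_TAXONOMY[
    LILA_COMMON_NAMES_TO_TAXONOMY.index.duplicated(keep=False)
]
print(duplicates.sort_index())
```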
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
```python
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Island Conservation Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
sea_turtle = LILA_COMMON_NAMES_TO_TAXONOMY.loc["sea turtle"].to_dict()
sea_turtle = {k: taxonomy[k].str2int(v) if v is not None else v for k, v in sea_turtle.items()} # Map to ClassLabel integers
sea_turtle_dataset = dataset.filter(lambda x: x["annotations"]["taxonomy"][0] == sea_turtle)
```
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
```python
import numpy as np
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
random_entry = dataset.shuffle()[0]
filter_taxonomy = random_entry["annotations"]["taxonomy"][0]
filter_keys = [
    (k, taxonomy[k].int2str(v))
    for k, v in filter_taxonomy.items()
    if v is not None
]
if len(filter_keys) > 0:
print(LILA_COMMON_NAMES_TO_TAXONOMY[np.logical_and.reduce([
LILA_COMMON_NAMES_TO_TAXONOMY[k] == v for k,v in filter_keys
])])
else:
print("No common name found for the item.")
```
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
"### Supported Tasks and Leaderboards\n\nNo leaderboards exist for LILA.",
"### Languages\n\nThe LILA taxonomy is provided in English.",
"## Dataset Structure",
"### Data Instances\n\nThe data annotations are provided in COCO Camera Traps format.\n\nAll of the datasets share a common category taxonomy, which is defined on the LILA website.",
"### Data Fields\n\nDifferent datasets may have slightly varying fields, which include:\n\n'file_name': the file name \\\n'width' and 'height': the dimensions of the image \\\n'study': which research study the image was collected as part of \\\n'location' : the name of the location at which the image was taken \\\n 'annotations': information about image annotation, which includes the taxonomy information, bounding box/boxes ('bbox'/'bboxes') if any, as well as any other annotation information. \\\n 'image' : the 'path' to download the image and any other information that is available, e.g. its size in 'bytes'.",
"### Data Splits\n\nThis dataset does not have a predefined train/test split.",
"## Dataset Creation",
"### Curation Rationale\n\nThe datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.",
"### Source Data",
"#### Initial data collection and normalization\n\nN/A",
"#### Who are the source language producers?\n\nN/A",
"### Annotations",
"#### Annotation process\n\nEach dataset has been annotated by the members of the project/organization that provided it.",
"#### Who are the annotators?\n\nThe annotations have been provided by domain experts in fields such as biology and ecology.",
"### Personal and Sensitive Information\n\nSome of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the LILA maintainers, since in some cases it will be possible to release those images under an alternative license.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nMachine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.",
"### Discussion of Biases\n\nThese datasets do not represent global diversity, but are examples of local ecosystems and animals.",
"### Other Known Limitations\n\nN/A",
"## Additional Information",
"### Tutorial\n\nThe tutorial in this Google Colab notebook demonstrates how to work with this dataset, including filtering by species, collating configurations, and downloading images.",
"### Working with Taxonomies\n\nAll the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the \"Caltech Camera Traps\" dataset to find all the entries with a \"felis catus\" as the species for the first annotation.\n\n\n\nThe original common names have been saved with their taxonomy mappings in this repository in 'common_names_to_tax.json'. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.\n\nThe following example loads the first \"sea turtle\" in the \"Island Conservation Camera Traps\" dataset.\n\n\n\nThe example below selects a random item from the dataset, and then maps from the taxonomy to a common name:",
"### Dataset Curators\n\nLILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.",
"### Licensing Information\n\nMany, but not all, LILA data sets were released under the Community Data License Agreement (permissive variant). Check the details of the specific dataset you are using in its section above.\n\n\n\nCitations for each dataset (if they exist) are provided in its section above.",
"### Contributions\n\nThanks to @NimaBoscarino for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #biodiversity #camera trap data #wildlife monitoring #region-us \n",
"# Dataset Card for LILA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Tutorial\n - Working with Taxonomies\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: https://lila.science/\n- Repository: N/A\n- Paper: N/A\n- Leaderboard: N/A\n- Point of Contact: [email protected]",
"### Dataset Summary\n\nLILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.\n\nThis data set is the first time when disparate camera trap data sets have been aggregated into a single training environment with a single taxonomy.\n\nThis data set consists of only camera trap image data sets, whereas the broader LILA website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.\n\n\nSee below for information about each specific dataset that LILA contains:\n\n<details>\n<summary> Caltech Camera Traps </summary>\n\nThis data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.\nMore information about this data set is available here.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nFor questions about this data set, contact caltechcameratraps@URL.\n\nIf you use this data set, please cite the associated manuscript:\n\n</details>\n\n<details>\n<summary> ENA24 </summary>\n\nThis data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nPlease cite this manuscript if you use this data set:\n\nFor questions about this data set, contact Hayder Yousif.\n\n</details>\n\n<details>\n<summary> Missouri Camera Traps </summary>\n\nThis data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nIf you use this data set, please cite the associated manuscript:\n\nFor questions about this data set, contact Hayder Yousif and Zhi Zhang.\n</details>\n\n<details>\n<summary> North American Camera Trap Images (NACTI) </summary>\n\nThis data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. 
We have also added bounding box annotations to 8892 images (mostly vehicles and birds).\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nPlease cite this manuscript if you use this data set:\n\n\nFor questions about this data set, contact northamericancameratrapimages@URL.\n\n</details>\n\n<details>\n<summary> WCS Camera Traps </summary>\n\nThis data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the Wildlife Conservation Society. The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.\n\nSequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set on the LILA website.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n<details>\n<summary> Wellington Camera Traps </summary>\n\nThis data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).\n\nIf you use this data set, please cite the associated manuscript:\n\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nFor questions about this data set, contact Victor Anton.\n</details>\n\n<details>\n<summary> Island Conservation Camera Traps </summary>\n\nThis data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.\n\nThe most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. 
We have also included approximately 65,000 bounding box annotations for about 50,000 images.\n\nIn general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.\n\nFor questions about this data set, contact David Will at Island Conservation.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nThe original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.\n</details>\n\n<details>\n<summary> Channel Islands Camera Traps </summary>\n\nThis data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.\n\nIf you use these data in a publication or report, please use the following citation:\n\nThe Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.\n\nFor questions about this data set, contact Nathaniel Rindlaub at The Nature Conservancy.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\nThe original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.\n\n</details>\n\n<details>\n<summary> Idaho Camera Traps </summary>\n\nThis data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.\n\nThe metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).\n\nImages were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. 
Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.\n</details>\n\n<details>\n<summary> Snapshot Serengeti </summary>\n\nThis data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the Snapshot Serengeti project -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.\n\nLabels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available here. We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.\n\nThe images and species-level labels are described in more detail in the associated manuscript:\n\n\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n<details>\n<summary> Snapshot Karoo </summary>\n\nThis data set contains 14889 sequences of camera trap images, totaling 38074 images, from the Snapshot Karoo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.\n\nLabels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Kgalagadi </summary>\n\nThis data set contains 3611 sequences of camera trap images, totaling 10222 images, from the Snapshot Kgalagadi project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. 
This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.\n\nLabels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Enonkishu </summary>\n\nThis data set contains 13301 sequences of camera trap images, totaling 28544 images, from the Snapshot Enonkishu project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.\n\nLabels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Camdeboo </summary>\n\nThis data set contains 12132 sequences of camera trap images, totaling 30227 images, from the Snapshot Camdeboo project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.\n\nLabels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Mountain Zebra </summary>\n\nThis data set contains 71688 sequences of camera trap images, totaling 73034 images, from the Snapshot Mountain Zebra project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. 
Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.\n\nLabels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> Snapshot Kruger </summary>\n\nThis data set contains 4747 sequences of camera trap images, totaling 10072 images, from the Snapshot Kruger project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.\n\nLabels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available here.\n\nFor questions about this data set, contact Sarah Huebner at the University of Minnesota.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>\n\n\n<details>\n<summary> SWG Camera Traps </summary>\n\nThis data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.\n\nThis data set is provided by the Saola Working Group; providers include:\n\n- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)\n- Asian Arks\n- Wildlife Conservation Society (Lao)\n- WWF Lao\n- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)\n- Center for Environment and Rural Development, Vinh University, Vietnam\n\nIf you use these data in a publication or report, please use the following citation:\n\nSWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. 
Dataset.\n\nFor questions about this data set, contact saolawg@URL.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n\n</details>\n\n<details>\n<summary> Orinoquia Camera Traps </summary>\n\nThis data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.\n\nThis data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.\n\nThe main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying GitHub repository.\n\nIf you use these data in a publication or report, please use the following citation:\n\nFor questions about this data set, contact Juliana Velez Gomez.\n\nThis data set is released under the Community Data License Agreement (permissive variant).\n</details>",
"### Supported Tasks and Leaderboards\n\nNo leaderboards exist for LILA.",
"### Languages\n\nThe LILA taxonomy is provided in English.",
"## Dataset Structure",
"### Data Instances\n\nThe data annotations are provided in COCO Camera Traps format.\n\nAll of the datasets share a common category taxonomy, which is defined on the LILA website.",
"### Data Fields\n\nDifferent datasets may have slightly varying fields, which include:\n\n'file_name': the file name \\\n'width' and 'height': the dimensions of the image \\\n'study': which research study the image was collected as part of \\\n'location' : the name of the location at which the image was taken \\\n 'annotations': information about image annotation, which includes the taxonomy information, bounding box/boxes ('bbox'/'bboxes') if any, as well as any other annotation information. \\\n 'image' : the 'path' to download the image and any other information that is available, e.g. its size in 'bytes'.",
"### Data Splits\n\nThis dataset does not have a predefined train/test split.",
"## Dataset Creation",
"### Curation Rationale\n\nThe datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.",
"### Source Data",
"#### Initial data collection and normalization\n\nN/A",
"#### Who are the source language producers?\n\nN/A",
"### Annotations",
"#### Annotation process\n\nEach dataset has been annotated by the members of the project/organization that provided it.",
"#### Who are the annotators?\n\nThe annotations have been provided by domain experts in fields such as biology and ecology.",
"### Personal and Sensitive Information\n\nSome of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the LILA maintainers, since in some cases it will be possible to release those images under an alternative license.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nMachine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.",
"### Discussion of Biases\n\nThese datasets do not represent global diversity, but are examples of local ecosystems and animals.",
"### Other Known Limitations\n\nN/A",
"## Additional Information",
"### Tutorial\n\nThe tutorial in this Google Colab notebook demonstrates how to work with this dataset, including filtering by species, collating configurations, and downloading images.",
"### Working with Taxonomies\n\nAll the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the \"Caltech Camera Traps\" dataset to find all the entries with a \"felis catus\" as the species for the first annotation.\n\n\n\nThe original common names have been saved with their taxonomy mappings in this repository in 'common_names_to_tax.json'. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.\n\nThe following example loads the first \"sea turtle\" in the \"Island Conservation Camera Traps\" dataset.\n\n\n\nThe example below selects a random item from the dataset, and then maps from the taxonomy to a common name:",
"### Dataset Curators\n\nLILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.",
"### Licensing Information\n\nMany, but not all, LILA data sets were released under the Community Data License Agreement (permissive variant). Check the details of the specific dataset you are using in its section above.\n\n\n\nCitations for each dataset (if they exist) are provided in its section above.",
"### Contributions\n\nThanks to @NimaBoscarino for adding this dataset."
] |
4f352870d3552163c0b4be7ee7195e1cf402f5b3 |
# Dataset Card for openpi_v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Open PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.
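To make the tuple format concrete, here is a minimal, hypothetical example; the step text and values below are invented for illustration and are not taken from the dataset.

```python
# Hypothetical state change for the step "Pour the water into the pot",
# encoded in the (entity, attribute, before, after) format described above.
state_change = {
    "entity": "water",
    "attribute": "location",
    "before": "kettle",
    "after": "pot",
}
print(f"{state_change['entity']}.{state_change['attribute']}: "
      f"{state_change['before']} -> {state_change['after']}")
```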
### Supported Tasks and Leaderboards
- `Task 1`: Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change)
- `Task 3`: Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations)
- `Task 4`: Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes)
- `Task 7`: Given image url, identify the visual attributes of entity and non-visual attributes of entity that change
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
"goal": "goal1_text",
"steps": [
"step1_text",
"step2_text",
...
],
"topics": "topic1_annotation",
"image_urls": [
"step1_url_text",
"step2_url_text",
...
],
"states": [
{
"answers_openpiv1_metadata": {
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before | step1_entity2_before, after: step1_entity1_after | step1_entity2_after",
...
],
"modality": [
"step1_entity1_modality_id | step1_entity2_modality_id",
...
]
},
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before_merged | step1_entity2_before_merged, after: step1_entity1_after_merged | step1_entity2_after_merged",
...
]
}
]
}
```
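The dataset can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the split name and the nested field access follow the structure shown above and should be adjusted if the repository's configuration differs.

```python
from datasets import load_dataset

# Load the training split of openpi_v2 from the Hugging Face Hub
# (assumes the default configuration and a "train" split).
dataset = load_dataset("abhinavk/openpi_v2", split="train")

example = dataset[0]
print(example["goal"])       # the procedure's goal
print(example["steps"][:2])  # first two step texts

# Each entry of "states" groups entities/attributes that change at a step.
first_state = example["states"][0]
print(first_state["entity"])
print(first_state["answers"][:1])
```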
### Data Fields
The following is an excerpt from the dataset README:
Within "goal", "steps", "topics", and "image_urls", the fields should be self-explanatory. Listed below is an explanation about those within "states":
#### Fields specific to questions:
### Data Splits
Train, Valid, Dev
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | abhinavk/openpi_v2 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-10-31T04:49:26+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["question-answering", "text-classification"], "task_ids": ["entity-linking-classification", "natural-language-inference"], "pretty_name": "openpi_v2", "tags": []} | 2022-11-07T02:23:34+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-text-classification #task_ids-entity-linking-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for openpi_v2
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Open PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.
### Supported Tasks and Leaderboards
- 'Task 1': Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change)
- 'Task 3': Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations)
- 'Task 4': Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes)
- 'Task 7': Given image url, identify the visual attributes of entity and non-visual attributes of entity that change
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
### Data Fields
The following is an excerpt from the dataset README:
Within "goal", "steps", "topics", and "image_urls", the fields should be self-explanatory. Listed below is an explanation about those within "states":
#### Fields specific to questions:
### Data Splits
Train, Valid, Dev
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for openpi_v2",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nOpen PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.",
"### Supported Tasks and Leaderboards\n\n- 'Task 1': Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change) \n\n- 'Task 3': Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations) \n\n- 'Task 4': Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes) \n\n- 'Task 7': Given image url, identify the visual attributes of entity and non-visual attributes of entity that change",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA typical instance in the dataset:",
"### Data Fields\n\nThe following is an excerpt from the dataset README:\n\nWithin \"goal\", \"steps\", \"topics\", and \"image_urls\", the fields should be self-explanatory. Listed below is an explanation about those within \"states\":",
"#### Fields specific to questions:",
"### Data Splits\n\nTrain, Valid, Dev",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_categories-text-classification #task_ids-entity-linking-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for openpi_v2",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nOpen PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.",
"### Supported Tasks and Leaderboards\n\n- 'Task 1': Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change) \n\n- 'Task 3': Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations) \n\n- 'Task 4': Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes) \n\n- 'Task 7': Given image url, identify the visual attributes of entity and non-visual attributes of entity that change",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA typical instance in the dataset:",
"### Data Fields\n\nThe following is an excerpt from the dataset README:\n\nWithin \"goal\", \"steps\", \"topics\", and \"image_urls\", the fields should be self-explanatory. Listed below is an explanation about those within \"states\":",
"#### Fields specific to questions:",
"### Data Splits\n\nTrain, Valid, Dev",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
0ce47b4d95b13204112fea6b36bb35847d690f35 |
# Dataset Card for MyoQuant SDH Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances and Splits](#data-instances-and-splits)
- [Dataset Creation and Annotations](#dataset-creation-and-annotations)
- [Source Data and annotation process](#source-data-and-annotation-process)
  - [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases and Limitations](#discussion-of-biases-and-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [The Team Behind this Dataset](#the-team-behind-this-dataset)
- [Partners](#partners)
## Dataset Description
- **Homepage:** https://github.com/lambda-science/MyoQuant
- **Repository:** https://huggingface.co/corentinm7/MyoQuant-SDH-Model
- **Paper:** Yet To Come
- **Leaderboard:** N/A
- **Point of Contact:** [**Corentin Meyer**, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) email: <[email protected]>
### Dataset Summary
<p align="center">
<img src="https://i.imgur.com/mzALgZL.png" alt="MyoQuant Banner" style="border-radius: 25px;" />
</p>
This dataset contains images of individual muscle fibers used to train the [MyoQuant](https://github.com/lambda-science/MyoQuant) SDH model. The goal of these data is to train a tool to classify SDH-stained muscle fibers according to the presence of mitochondria repartition anomalies, a pathological feature useful for diagnosis and classification in patients with congenital myopathies.
## Dataset Structure
### Data Instances and Splits
A total of 16,787 single muscle fiber images are in the dataset, split into three sets: train, validation, and test.
See the table for the exact count of images in each category:
| | Train (72%) | Validation (8%) | Test (20%) | TOTAL |
|---------|-------------|-----------------|------------|-------------|
| control | 9165 | 1019 | 2546 | 12730 (76%) |
| sick | 2920 | 325 | 812 | 4057 (24%) |
| TOTAL | 12085 | 1344 | 3358 | 16787 |
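The splits above can be loaded directly with the Hugging Face `datasets` library. In the sketch below, the `SDH_16k` configuration name is taken from the dataset metadata and should be treated as an assumption if the repository layout changes.

```python
from datasets import load_dataset

# Load the three splits of the SDH single-fiber image dataset.
# The "SDH_16k" config name comes from the dataset metadata; drop or
# change it if the repository exposes a different configuration.
ds = load_dataset("corentinm7/MyoQuant-SDH-Data", "SDH_16k")

train, val, test = ds["train"], ds["validation"], ds["test"]
print(len(train), len(val), len(test))  # expected: 12085, 1344, 3358

# Each example holds a PIL image and an integer label (0: control, 1: sick).
sample = train[0]
print(sample["label"], sample["image"].size)
```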
## Dataset Creation and Annotations
### Source Data and annotation process
To create this dataset of single muscle fiber images, whole slide images of mouse muscle fibers with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). The cells contained within these slides were manually counted, labeled and classified into two categories, control (no anomaly) or sick (mitochondria anomaly), by two experts/annotators. All single muscle fiber images were then extracted using CellPose to detect each individual cell's boundaries, resulting in 16,787 images from 18 whole slide images.
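For readers who want to reproduce the extraction step, the sketch below shows one plausible way to segment and crop individual fibers with CellPose. The input path, model type, channel settings, and diameter are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np
from cellpose import models
from skimage import io

# Segment a whole slide (or tile) image with CellPose's generalist model.
# The 'cyto' model type, channels=[0, 0] (grayscale) and diameter=None
# (auto-estimate) are assumptions for illustration.
img = io.imread("sdh_slide.tif")  # hypothetical input path
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

# Crop one image per detected cell using its mask bounding box.
cells = []
for cell_id in np.unique(masks):
    if cell_id == 0:  # label 0 is the background
        continue
    ys, xs = np.where(masks == cell_id)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    cells.append(crop)
print(f"Extracted {len(cells)} single-fiber crops")
```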
### Who are the annotators?
All data in this dataset were generated and manually annotated by two experts:
- [**Quentin GIRAUD, PhD Student**](https://twitter.com/GiraudGiraud20) @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
A second pass of verification was done by:
- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
### Personal and Sensitive Information
All image data comes from mice, there is no personal nor sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The aim of this dataset is to improve the diagnosis of congenital myopathies by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.
### Discussion of Biases and Limitations
This dataset has several limitations (non-exhaustive list):
- The images are from mice and thus might not be ideal to represent the actual mechanisms in human muscle.
- The images come from only two mouse models with mutations in two genes (BIN1, DNM2), while congenital myopathies can be caused by mutations in more than 35 genes.
- Only mitochondria anomalies were considered to classify cells as "sick"; other anomalies were not considered, so control cells might present other anomalies (such as what are called "cores" in congenital myopathies, for example).
## Additional Information
### Licensing Information
This dataset is under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open source stays open source and available to the community.
### Citation Information
The MyoQuant publication with the model and data is yet to come.
## The Team Behind this Dataset
**The creator, uploader and main maintainer of this dataset, associated model and MyoQuant is:**
- **[Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) Email: <[email protected]> Github: [@lambda-science](https://github.com/lambda-science)**
Special thanks to the experts who created the data for this dataset and for all the time they spent counting cells:
- **Quentin GIRAUD, PhD Student** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
Last but not least, thanks to Bertrand Vernay for being at the origin of this project:
- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <[email protected]>
## Partners
<p align="center">
<img src="https://i.imgur.com/m5OGthE.png" alt="Partner Banner" style="border-radius: 25px;" />
</p>
MyoQuant-SDH-Data was born from the collaboration between the [CSTB Team @ ICube](https://cstb.icube.unistra.fr/en/index.php/Home) led by Julie D. Thompson, the [Morphological Unit of the Institute of Myology of Paris](https://www.institut-myologie.org/en/recherche-2/neuromuscular-investigation-center/morphological-unit/) led by Teresinha Evangelista, the [imagery platform MyoImage of the Center of Research in Myology](https://recherche-myologie.fr/technologies/myoimage/) led by Bruno Cadot, the [photonic microscopy platform of the IGBMC](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy) led by Bertrand Vernay and the [Pathophysiology of neuromuscular diseases team @ IGBMC](https://www.igbmc.fr/en/igbmc/a-propos-de-ligbmc/directory/jocelyn-laporte) led by Jocelyn Laporte.
| corentinm7/MyoQuant-SDH-Data | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:10K<n<100K",
"source_datasets:original",
"license:agpl-3.0",
"myology",
"biology",
"histology",
"muscle",
"cells",
"fibers",
"myopathy",
"SDH",
"myoquant",
"region:us"
] | 2022-10-31T08:37:20+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["agpl-3.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "SDH staining muscle fiber histology images used to train MyoQuant model.", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "control", "1": "sick"}}}}], "config_name": "SDH_16k", "splits": [{"name": "test", "num_bytes": 683067, "num_examples": 3358}, {"name": "train", "num_bytes": 2466024, "num_examples": 12085}, {"name": "validation", "num_bytes": 281243, "num_examples": 1344}], "download_size": 2257836789, "dataset_size": 3430334}, "tags": ["myology", "biology", "histology", "muscle", "cells", "fibers", "myopathy", "SDH", "myoquant"]} | 2022-11-16T18:19:23+00:00 | [] | [] | TAGS
#task_categories-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #size_categories-10K<n<100K #source_datasets-original #license-agpl-3.0 #myology #biology #histology #muscle #cells #fibers #myopathy #SDH #myoquant #region-us
| Dataset Card for MyoQuant SDH Data
==================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
* Dataset Structure
+ Data Instances and Splits
* Dataset Creation and Annotations
+ Source Data and annotation process
+ Who are the annotators ?
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases and Limitations
* Additional Information
+ Licensing Information
+ Citation Information
* The Team Behind this Dataset
* Partners
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: Yet To Come
* Leaderboard: N/A
* Point of Contact: Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra email: [URL@URL](mailto:URL@URL)
### Dataset Summary

This dataset contains images of individual muscle fibers used to train the MyoQuant SDH model. The goal of these data is to train a tool to classify SDH-stained muscle fibers depending on the presence of mitochondria repartition anomalies, a pathological feature useful for diagnosis and classification in patients with congenital myopathies.
Dataset Structure
-----------------
### Data Instances and Splits
A total of 16 787 single muscle fiber images are in the dataset, split into three sets: train, validation and test.
See the table for the exact count of images in each split:

| Split | Images |
| --- | --- |
| train | 12085 |
| validation | 1344 |
| test | 3358 |
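For reference, here is a minimal sketch of loading these splits with the Hugging Face `datasets` library; the `SDH_16k` config name and the control/sick label mapping come from this repository's metadata:

```python
from datasets import load_dataset

ds = load_dataset("corentinm7/MyoQuant-SDH-Data", "SDH_16k")
print(ds)  # DatasetDict with train / validation / test splits

example = ds["train"][0]
example["image"]  # PIL image of a single muscle fiber
example["label"]  # 0 = control, 1 = sick
```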
Dataset Creation and Annotations
--------------------------------
### Source Data and annotation process
To create this dataset of single muscle fiber images, whole slide images of mouse muscle fibers with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). Cells contained within these slides were manually counted, labeled and classified by two experts/annotators into two categories: control (no anomaly) or sick (mitochondria anomaly). Then all single muscle fiber images were extracted from the whole slide images using CellPose to detect each individual cell’s boundaries, resulting in 16787 images from 18 whole slide images.
### Who are the annotators?
All data in this dataset were generated and manually annotated by two experts:
* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)
* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)
A second pass of verification was done by:
* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)
### Personal and Sensitive Information
All image data comes from mice; there is no personal or sensitive information in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The aim of this dataset is to improve the diagnosis of congenital myopathies by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.
### Discussion of Biases and Limitations
This dataset has several limitations (non-exhaustive list):
* The images are from mice and thus might not be ideal to represent the actual mechanisms in human muscle
* The images come only from two mouse models with mutations in two genes (BIN1, DNM2), while congenital myopathies can be caused by mutations in more than 35 genes.
* Only the mitochondria anomaly was considered to classify cells as "sick"; other anomalies were not considered, so control cells might present other anomalies (such as what are called "cores" in congenital myopathies, for example)
Additional Information
----------------------
### Licensing Information
This dataset is under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open-source stays open-source and available to the community.
The MyoQuant publication with the model and data is yet to come.
The Team Behind this Dataset
----------------------------
The creator, uploader and main maintainer of this dataset, associated model and MyoQuant is:
* Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra Email: [URL@URL](mailto:URL@URL) Github: @lambda-science
Special thanks to the experts who created the data for this dataset and for all the time they spent counting cells:
* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)
* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)
Last but not least, thanks to Bertrand Vernay for being at the origin of this project:
* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)
Partners
--------

MyoQuant-SDH-Data was born from the collaboration between the CSTB Team @ ICube led by Julie D. Thompson, the Morphological Unit of the Institute of Myology of Paris led by Teresinha Evangelista, the imagery platform MyoImage of the Center of Research in Myology led by Bruno Cadot, the photonic microscopy platform of the IGBMC led by Bertrand Vernay and the Pathophysiology of neuromuscular diseases team @ IGBMC led by Jocelyn Laporte.
| [
"### Dataset Summary\n\n\n\n\n\n\n\nThis dataset contains images of individual muscle fiber used to train MyoQuant SDH Model. The goal of these data is to train a tool to classify SDH stained muscle fibers depending on the presence of mitochondria repartition anomalies. A pathological feature useful for diagnosis and classification in patient with congenital myopathies.\n\n\nDataset Structure\n-----------------",
"### Data Instances and Splits\n\n\nA total of 16 787 single muscle fiber images are in the dataset, split in three sets: train, validation and test set. \n\nSee the table for the exact count of images in each category:\n\n\n\nDataset Creation and Annotations\n--------------------------------",
"### Source Data and annotation process\n\n\nTo create this dataset of single muscle images, whole slide image of mice muscle fiber with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). Cells contained within these slides manually counted, labeled and classified in two categories: control (no anomaly) or sick (mitochondria anomaly) by two experts/annotators. Then all single muscle images were extracted from the image using CellPose to detect each individual cell’s boundaries. Resulting in 16787 images from 18 whole image slides.",
"### Who are the annotators?\n\n\nAll data in this dataset were generated and manually annotated by two experts:\n\n\n* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)\n\n\nA second pass of verification was done by:\n\n\n* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)",
"### Personal and Sensitive Information\n\n\nAll image data comes from mice, there is no personal nor sensitive information in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe aim of this dataset is to improve congenital myopathies diagnosis by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.",
"### Discussion of Biases and Limitations\n\n\nThis dataset has several limitations (non-exhaustive list):\n\n\n* The images are from mice and thus might not be ideal to represent actual mechanism in human muscle\n* The image comes only from two mice models with mutations in two genes (BIN1, DNM2) while congenital myopathies can be caused by a mutation in more than 35+ genes.\n* Only mitochondria anomaly was considered to classify cells as \"sick\", other anomalies were not considered, thus control cells might present other anomalies (such as what is called \"cores\" in congenital myopathies for examples)\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis dataset is under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open-source, stays open-source and available to the community.\n\n\nMyoQuant publication with model and data is yet to come.\n\n\nThe Team Behind this Dataset\n----------------------------\n\n\nThe creator, uploader and main maintainer of this dataset, associated model and MyoQuant is:\n\n\n* Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra Email: [URL@URL](mailto:URL@URL) Github: @lambda-science\n\n\nSpecial thanks to the experts that created the data for this dataset and all the time they spend counting cells :\n\n\n* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)\n\n\nLast but not least thanks to Bertrand Vernay being at the origin of this project:\n\n\n* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n\n\nPartners\n--------\n\n\n\n\n\n\n\nMyoQuant-SDH-Data is born within the collaboration between the CSTB Team @ ICube led by Julie D. Thompson, the Morphological Unit of the Institute of Myology of Paris led by Teresinha Evangelista, the imagery platform MyoImage of Center of Research in Myology led by Bruno Cadot, the photonic microscopy platform of the IGMBC led by Bertrand Vernay and the Pathophysiology of neuromuscular diseases team @ IGBMC led by Jocelyn Laporte"
] | [
"TAGS\n#task_categories-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #size_categories-10K<n<100K #source_datasets-original #license-agpl-3.0 #myology #biology #histology #muscle #cells #fibers #myopathy #SDH #myoquant #region-us \n",
"### Dataset Summary\n\n\n\n\n\n\n\nThis dataset contains images of individual muscle fiber used to train MyoQuant SDH Model. The goal of these data is to train a tool to classify SDH stained muscle fibers depending on the presence of mitochondria repartition anomalies. A pathological feature useful for diagnosis and classification in patient with congenital myopathies.\n\n\nDataset Structure\n-----------------",
"### Data Instances and Splits\n\n\nA total of 16 787 single muscle fiber images are in the dataset, split in three sets: train, validation and test set. \n\nSee the table for the exact count of images in each category:\n\n\n\nDataset Creation and Annotations\n--------------------------------",
"### Source Data and annotation process\n\n\nTo create this dataset of single muscle images, whole slide image of mice muscle fiber with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). Cells contained within these slides manually counted, labeled and classified in two categories: control (no anomaly) or sick (mitochondria anomaly) by two experts/annotators. Then all single muscle images were extracted from the image using CellPose to detect each individual cell’s boundaries. Resulting in 16787 images from 18 whole image slides.",
"### Who are the annotators?\n\n\nAll data in this dataset were generated and manually annotated by two experts:\n\n\n* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)\n\n\nA second pass of verification was done by:\n\n\n* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)",
"### Personal and Sensitive Information\n\n\nAll image data comes from mice, there is no personal nor sensitive information in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe aim of this dataset is to improve congenital myopathies diagnosis by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.",
"### Discussion of Biases and Limitations\n\n\nThis dataset has several limitations (non-exhaustive list):\n\n\n* The images are from mice and thus might not be ideal to represent actual mechanism in human muscle\n* The image comes only from two mice models with mutations in two genes (BIN1, DNM2) while congenital myopathies can be caused by a mutation in more than 35+ genes.\n* Only mitochondria anomaly was considered to classify cells as \"sick\", other anomalies were not considered, thus control cells might present other anomalies (such as what is called \"cores\" in congenital myopathies for examples)\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThis dataset is under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open-source, stays open-source and available to the community.\n\n\nMyoQuant publication with model and data is yet to come.\n\n\nThe Team Behind this Dataset\n----------------------------\n\n\nThe creator, uploader and main maintainer of this dataset, associated model and MyoQuant is:\n\n\n* Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra Email: [URL@URL](mailto:URL@URL) Github: @lambda-science\n\n\nSpecial thanks to the experts that created the data for this dataset and all the time they spend counting cells :\n\n\n* Quentin GIRAUD, PhD Student @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n* Charlotte GINESTE, Post-Doc @ Department Translational Medicine, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [charlotte.gineste@URL](mailto:charlotte.gineste@URL)\n\n\nLast but not least thanks to Bertrand Vernay being at the origin of this project:\n\n\n* Bertrand VERNAY, Platform Leader @ Light Microscopy Facility, IGBMC, CNRS UMR 7104, 1 rue Laurent Fries, 67404 Illkirch, France [URL@URL](mailto:URL@URL)\n\n\nPartners\n--------\n\n\n\n\n\n\n\nMyoQuant-SDH-Data is born within the collaboration between the CSTB Team @ ICube led by Julie D. Thompson, the Morphological Unit of the Institute of Myology of Paris led by Teresinha Evangelista, the imagery platform MyoImage of Center of Research in Myology led by Bruno Cadot, the photonic microscopy platform of the IGMBC led by Bertrand Vernay and the Pathophysiology of neuromuscular diseases team @ IGBMC led by Jocelyn Laporte"
] |
892faabeccc027ec862b3889a6cb232ea04d4558 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
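As a hedged sketch of how such a predictions repository could be scored by hand (its exact schema is not documented here, so the column and split names below are assumptions to verify against the repo's files):

```python
import evaluate
from datasets import load_dataset

preds = load_dataset(
    "autoevaluate/autoeval-staging-eval-project-083d71a4-50b6-4074-aa7d-a46eddb83f06-42"
)
refs = load_dataset("emotion", split="test")["label"]

mcc = evaluate.load("matthews_correlation")
# Adjust "train" and "predictions" to the repository's actual split/column names
# print(mcc.compute(predictions=preds["train"]["predictions"], references=refs))
```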
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-083d71a4-50b6-4074-aa7d-a46eddb83f06-42 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T09:10:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T09:11:37+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
50549635611eefc47cc7852b05fa7838e6b32ea3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-fe056b5c-7e36-4094-b3f2-84d1fbaaf77c-53 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T09:25:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T09:25:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
08d5a56fbbfbd8f8e7c6372cfb2f43159388f872 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-6da44258-8968-4823-8933-3375e1cfee89-64 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T10:45:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-10-31T10:45:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
35619762a828711029111dac816e3be6bfb33059 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-0d3aacb2-653b-459b-af2f-2d90d5362791-75 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:00:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:00:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
66839876b5ad5337aa11c89d71db04f3e1e2ff15 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-95ce44b7-7684-4cf4-b396-d486367937e4-86 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:29:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:29:54+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
4e0cf3f26014b3ececa0fe89260099593caeb3c0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-f69c187c-a1f8-462d-8272-41a77bd1f8ed-97 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T11:32:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-10-31T11:32:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
c8287a1fdc3bb36bdbc84293a1a34cf4ee5384c5 | # positive-reframing-ptbr-dataset
Version of the dataset from the work ["Inducing Positive Perspectives with Text Reframing"](https://arxiv.org/abs/2204.02952), translated into pt-BR. Used in the model [positive-reframing-ptbr](https://huggingface.co/dominguesm/positive-reframing-ptbr).
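A minimal sketch of loading the translated data; the column names come from this repository's metadata:

```python
from datasets import load_dataset

ds = load_dataset("dominguesm/positive-reframing-ptbr-dataset")
ex = ds["train"][0]
print(ex["strategy"])  # positive-reframing strategy label
print(ex["original_text"], "->", ex["reframed_text"])
```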
**Citation:**
> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.
**BibTeX:**
```tex
@inproceedings{ziems-etal-2022-positive-frames,
title = "Inducing Positive Perspectives with Text Reframing",
author = "Ziems, Caleb and
Li, Minzhi and
Zhang, Anthony and
Yang, Diyi",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
address = "Online and Dublin, Ireland",
publisher = "Association for Computational Linguistics"
}
``` | dominguesm/positive-reframing-ptbr-dataset | [
"arxiv:2204.02952",
"region:us"
] | 2022-10-31T12:17:25+00:00 | {"dataset_info": {"features": [{"name": "original_text", "dtype": "string"}, {"name": "reframed_text", "dtype": "string"}, {"name": "strategy", "dtype": "string"}, {"name": "strategy_original_text", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 318805, "num_examples": 835}, {"name": "test", "num_bytes": 321952, "num_examples": 835}, {"name": "train", "num_bytes": 2586935, "num_examples": 6679}], "download_size": 1845244, "dataset_size": 3227692}} | 2022-10-31T12:43:59+00:00 | [
"2204.02952"
] | [] | TAGS
#arxiv-2204.02952 #region-us
| # positive-reframing-ptbr-dataset
Version of the dataset from the work "Inducing Positive Perspectives with Text Reframing", translated into pt-BR. Used in the model positive-reframing-ptbr.
Citation:
> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.
BibTeX:
| [
"# positive-reframing-ptbr-dataset\n\nVersion translated into pt-br of the dataset available in the work \"Inducing Positive Perspectives with Text Reframing\". Used in model positive-reframing-ptbr.\n\n\nCitation:\n\n> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.\n\nBibTeX:"
] | [
"TAGS\n#arxiv-2204.02952 #region-us \n",
"# positive-reframing-ptbr-dataset\n\nVersion translated into pt-br of the dataset available in the work \"Inducing Positive Perspectives with Text Reframing\". Used in model positive-reframing-ptbr.\n\n\nCitation:\n\n> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.\n\nBibTeX:"
] |
eb792fb79d79a7e3b3b12eaea26dfb5a6ec23deb | # Dataset Card for "FoodBase"
Dataset for the FoodBase corpus introduced in [this paper](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291).
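A minimal sketch of loading the corpus and inspecting its annotations; the column names come from this repository's metadata, and the example assumes standard IOB tagging where "O" marks non-entity tokens:

```python
from datasets import load_dataset

ds = load_dataset("Dizex/FoodBase")
ex = ds["train"][0]

# Pair each NLTK token with its IOB tag to list the annotated food mentions
for token, tag in zip(ex["nltk_tokens"], ex["iob_tags"]):
    if tag != "O":
        print(token, tag)
```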
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dizex/FoodBase | [
"region:us"
] | 2022-10-31T12:42:55+00:00 | {"dataset_info": {"features": [{"name": "nltk_tokens", "sequence": "string"}, {"name": "iob_tags", "sequence": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2040036, "num_examples": 600}, {"name": "val", "num_bytes": 662190, "num_examples": 200}], "download_size": 353747, "dataset_size": 2702226}} | 2022-10-31T12:48:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "FoodBase"
Dataset for the FoodBase corpus introduced in this paper.
More Information needed | [
"# Dataset Card for \"FoodBase\"\n\nDataset for FoodBase corpus introduced in this paper.\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"FoodBase\"\n\nDataset for FoodBase corpus introduced in this paper.\n\nMore Information needed"
] |
5f56df48ab1ed088c122e2d73cd696e66e22e8e2 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended to be used for MLM and TSDAE training
Extended version of rufimelo/PortugueseLegalSentences-v1
200000/200000/100000
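For the TSDAE use case, here is a hedged sketch of unsupervised training with sentence-transformers over sentences from this corpus; the base checkpoint and hyperparameters are illustrative assumptions, not a prescribed recipe:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

# Illustrative Portuguese encoder -- swap in any checkpoint you prefer
base = "neuralmind/bert-base-portuguese-cased"
word_emb = models.Transformer(base)
pooling = models.Pooling(word_emb.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_emb, pooling])

sentences = ["..."]  # plain legal sentences loaded from this dataset
train_data = datasets.DenoisingAutoEncoderDataset(sentences)
loader = DataLoader(train_data, batch_size=8, shuffle=True)
loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=base, tie_encoder_decoder=True)

model.fit(train_objectives=[(loader, loss)], epochs=1, weight_decay=0, scheduler="constantlr")
```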
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v2 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-10-31T14:28:04+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-11-01T13:14:38+00:00 | [] | [
"pt"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us
| # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended to be used for MLM and TSDAE training
Extended version of rufimelo/PortugueseLegalSentences-v1
200000/200000/100000
### Contributions
@rufimelo99
| [
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE\nExtended version of rufimelo/PortugueseLegalSentences-v1\n\n200000/200000/100000",
"### Contributions\n@rufimelo99"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us \n",
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE\nExtended version of rufimelo/PortugueseLegalSentences-v1\n\n200000/200000/100000",
"### Contributions\n@rufimelo99"
] |
3bddddbe0ef0f314a548753b200ec3e681492a8e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SiraH/bert-finetuned-squad
* Dataset: subjqa
* Config: grocery
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sushant-joshi](https://huggingface.co/sushant-joshi) for evaluating this model. | autoevaluate/autoeval-eval-subjqa-grocery-9dee2c-1945965520 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-31T14:45:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["subjqa"], "eval_info": {"task": "extractive_question_answering", "model": "SiraH/bert-finetuned-squad", "metrics": [], "dataset_name": "subjqa", "dataset_config": "grocery", "dataset_split": "train", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-31T14:45:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: SiraH/bert-finetuned-squad
* Dataset: subjqa
* Config: grocery
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sushant-joshi for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: SiraH/bert-finetuned-squad\n* Dataset: subjqa\n* Config: grocery\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sushant-joshi for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: SiraH/bert-finetuned-squad\n* Dataset: subjqa\n* Config: grocery\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sushant-joshi for evaluating this model."
] |
db95ae658758c7b2337a54a2facabefe3af9698a | # Dataset Card for "cartoon-blip-captions"
| Norod78/cartoon-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-31T14:48:15+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Cartoon BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190959102.953, "num_examples": 3141}], "download_size": 190279356, "dataset_size": 190959102.953}, "tags": []} | 2022-11-09T16:27:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us
| # Dataset Card for "cartoon-blip-captions"
| [
"# Dataset Card for \"cartoon-blip-captions\""
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"cartoon-blip-captions\""
] |
17d5b9dafdaa266f17aedfaa0154fe56411cdb44 | # Dataset Card for "Arabic_SQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
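A minimal sketch of loading the data and rebuilding SQuAD-style answer dictionaries from the flat columns (column names come from this repository's metadata):

```python
from datasets import load_dataset

ds = load_dataset("Mostafa3zazi/Arabic_SQuAD")
row = ds["train"][0]

# Reassemble the usual SQuAD "answers" structure from the flat fields
answers = {"text": [row["text"]], "answer_start": [row["answer_start"]]}
print(row["question"])
print(answers)
```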
---
# Citation
```
@inproceedings{mozannar-etal-2019-neural,
title = "Neural {A}rabic Question Answering",
author = "Mozannar, Hussein and
Maamary, Elie and
El Hajal, Karl and
Hajj, Hazem",
booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4612",
doi = "10.18653/v1/W19-4612",
pages = "108--118",
abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```
--- | Mostafa3zazi/Arabic_SQuAD | [
"region:us"
] | 2022-10-31T19:16:37+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "c_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 61868003, "num_examples": 48344}], "download_size": 10512179, "dataset_size": 61868003}} | 2022-10-31T19:32:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Arabic_SQuAD"
More Information needed
---
--- | [
"# Dataset Card for \"Arabic_SQuAD\"\n\nMore Information needed\n\n---\n---"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Arabic_SQuAD\"\n\nMore Information needed\n\n---\n---"
] |
3857e5ae2a3357a65605cce3d8314a3570371cbb |
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not pickled; they are simple ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**artwork style**": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
* "**artwork style**": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE.
* "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**landscape photography**": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset.
People
* "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy man**": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy woman**": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
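A hedged sketch of fetching and unpacking one of these ZIP archives with `huggingface_hub` (the filename below is a placeholder; check the repository's file listing for the actual archive names):

```python
import zipfile
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ProGamerGov/StableDiffusion-v1-5-Regularization-Images",
    filename="person_ddim.zip",  # placeholder name -- use a real file from the repo
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("reg_images/person")  # 512x512 PNG class images for prior preservation
```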
| ProGamerGov/StableDiffusion-v1-5-Regularization-Images | [
"license:mit",
"image-text-dataset",
"synthetic-dataset",
"region:us"
] | 2022-10-31T22:21:09+00:00 | {"license": "mit", "tags": ["image-text-dataset", "synthetic-dataset"]} | 2023-11-18T20:46:01+00:00 | [] | [] | TAGS
#license-mit #image-text-dataset #synthetic-dataset #region-us
|
A collection of regularization / class instance datasets for the Stable Diffusion v1-5 model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the stabilityai/sd-vae-ft-mse VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not pickled; they are simple ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "artwork style": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "artwork style": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
* "artwork style": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE.
* "illustration style": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "erotic photography": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "landscape photography": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset.
People
* "person": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "woman": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "guy": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "supermodel": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "bikini model": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "sexy athlete": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "femme fatale": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "sexy man": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "sexy woman": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "kitty": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "cat": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "fighter jet": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "train": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "car": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "cyberpunk": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in AUTOMATIC1111's WebUI to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: URL URL URL and any other DreamBooth projects that have support for prior preservation loss.
| [] | [
"TAGS\n#license-mit #image-text-dataset #synthetic-dataset #region-us \n"
] |
3bc134f4be0eb287bca607e529ef11f06b7cea62 | My initial attempt at creating a dataset intended to create a customized model to include Ruby. | digiSilk/real_ruby | [
"region:us"
] | 2022-10-31T23:49:43+00:00 | {} | 2022-11-01T00:06:52+00:00 | [] | [] | TAGS
#region-us
My initial attempt at creating a dataset intended for training a customized model that includes Ruby.
"TAGS\n#region-us \n"
] |
a189a9d3742ff9a42941b536305cd77221d3262b | # BioNLP2021 dataset (Task2)
___
Data fields:
* text (str): source text; Section and Article (train_mul subset only) are separated by <SAS> ; Single Documents are separated by <DOC> ; Sentences are separated by <SS>
* summ_abs, summ_ext (str): abstractive and extractive summaries, whose Sentences are separated by <SS>
* question (str): question, whose Sentences are separated by <SS>
* key (str): key in the original dataset (for submission)
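A small helper for working with these separator tokens (a sketch; `example` stands for one loaded record with the fields listed above):

```python
def split_on(field: str, sep: str):
    """Split a field on a literal separator token such as <SS>, <DOC> or <SAS>."""
    return [part.strip() for part in field.split(sep) if part.strip()]

# e.g., recover individual documents and sentences from one record
documents = split_on(example["text"], "<DOC>")
sentences = split_on(example["text"], "<SS>")
abstractive_summary_sents = split_on(example["summ_abs"], "<SS>")
```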
"region:us"
] | 2022-11-01T01:51:49+00:00 | {} | 2023-01-02T02:11:44+00:00 | [] | [] | TAGS
#region-us
| # BioNLP2021 dataset (Task2)
___
Data fields:
* text (str): source text; Section and Article (train_mul subset only) are separated by <SAS> ; Single Documents are separated by <DOC> ; Sentences are separated by <SS>
* summ_abs, summ_ext (str): abstractive and extractive summaries, whose Sentences are separated by <SS>
* question (str): question, whose Sentences are separated by <SS>
* key (str): key in the original dataset (for submission)
"# BioNLP2021 dataset (Task2)\n___\n\nData fields:\n* text (str): source text; Section and Article (train_mul subset only) are separated by <SAS> ; Single Documents are separated by <DOC> ; Sentences are separated by <SS>\n* summ_abs, summ_ext (str): abstractive and extractive summarization, whose Sentences are separated by <SS>\n* question (str): question, whose Sentences are separated by <SS>\n* key (str): key in the origin dataset (for submitting)"
] | [
"TAGS\n#region-us \n",
"# BioNLP2021 dataset (Task2)\n___\n\nData fields:\n* text (str): source text; Section and Article (train_mul subset only) are separated by <SAS> ; Single Documents are separated by <DOC> ; Sentences are separated by <SS>\n* summ_abs, summ_ext (str): abstractive and extractive summarization, whose Sentences are separated by <SS>\n* question (str): question, whose Sentences are separated by <SS>\n* key (str): key in the origin dataset (for submitting)"
] |
4d005b3e1a5f1e558bf1e53ba4d4c6835c9fc667 | # Dataset Card for "text_summarization_dataset1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset1 | [
"region:us"
] | 2022-11-01T02:13:04+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 129017829, "num_examples": 106525}], "download_size": 43557623, "dataset_size": 129017829}} | 2022-11-01T02:13:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "text_summarization_dataset1"
More Information needed | [
"# Dataset Card for \"text_summarization_dataset1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_summarization_dataset1\"\n\nMore Information needed"
] |
55b0bfdf562703f905a60e4522bb56547c7406e8 | # Dataset Card for "text_summarization_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shahidul034/text_summarization_dataset2 | [
"region:us"
] | 2022-11-01T02:14:42+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 125954432, "num_examples": 105252}], "download_size": 42217690, "dataset_size": 125954432}} | 2022-11-01T02:14:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "text_summarization_dataset2"
More Information needed | [
"# Dataset Card for \"text_summarization_dataset2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"text_summarization_dataset2\"\n\nMore Information needed"
] |